path: root/drivers/dma
Commit message (Author, Age; Files, Lines changed)
* i.MX31: Image Processing Unit DMA and IRQ drivers (Guennadi Liakhovetski, 2009-01-19; 6 files, +2350/-0)

  i.MX3x SoCs contain an Image Processing Unit (IPU), consisting of a Control Module (CM), Display Interface (DI), Synchronous Display Controller (SDC), Asynchronous Display Controller (ADC), Image Converter (IC), Post-Filter (PF), Camera Sensor Interface (CSI), and an Image DMA Controller (IDMAC). The CM contains, among other blocks, an Interrupt Generator (IG) and a Clock and Reset Control Unit (CRCU). This driver serves the IDMAC and the IG, which are supported over the dmaengine and irq-chip APIs respectively. The IDMAC is a specialised DMA controller: its DMA channels cannot be used for general-purpose operations, even though it might be possible to configure a memory-to-memory channel for memcpy operation. The driver therefore does not work with generic dmaengine clients; clients wishing to use it must use the respective wrapper structures and must specify which channels they require, as channels are hard-wired to specific IPU functions.

  Acked-by: Sascha Hauer <s.hauer@pengutronix.de>
  Signed-off-by: Guennadi Liakhovetski <lg@denx.de>
  Signed-off-by: Dan Williams <dan.j.williams@intel.com>
* dmaengine: kill some dubious WARN_ONCEs (Dan Williams, 2009-01-19; 1 file, +0/-6)

  dma_find_channel and dma_issue_pending_all are good places to warn about improper api usage. However, warning correctly means synchronizing with dma_list_mutex, i.e. too much overhead for these fast-path calls.

  Reported-by: Ingo Molnar <mingo@elte.hu>
  Signed-off-by: Dan Williams <dan.j.williams@intel.com>
* fsldma: print correct IRQ on mpc83xx (Peter Korsgaard, 2009-01-16; 1 file, +2/-1)

  The mpc83xx variant uses a shared IRQ for all channels, so the individual channel nodes don't have an interrupt property. Fix the code to print the controller IRQ instead if there isn't any for the channel.

  Acked-by: Timur Tabi <timur@freescale.com>
  Acked-by: Li Yang <leoli@freescale.com>
  Signed-off-by: Peter Korsgaard <jacmet@sunsite.dk>
  Signed-off-by: Dan Williams <dan.j.williams@intel.com>
* fsldma: check for NO_IRQ in fsl_dma_chan_remove() (Peter Korsgaard, 2009-01-15; 1 file, +2/-1)

  There's no per-channel IRQ on mpc83xx, so only call free_irq() if we have one.

  Acked-by: Timur Tabi <timur@freescale.com>
  Acked-by: Li Yang <leoli@freescale.com>
  Signed-off-by: Peter Korsgaard <jacmet@sunsite.dk>
  Signed-off-by: Dan Williams <dan.j.williams@intel.com>
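  A minimal sketch of the guard this commit describes; the struct and field names are illustrative, not the actual fsldma ones, and NO_IRQ is the powerpc "no interrupt" value:

      #include <linux/interrupt.h>

      /* Illustrative only: not the real fsldma channel structure. */
      struct example_chan {
          unsigned int irq;    /* NO_IRQ when the channel has no own interrupt */
      };

      static void example_chan_remove(struct example_chan *chan)
      {
          /* mpc83xx shares one controller IRQ, so a channel may have none */
          if (chan->irq != NO_IRQ)
              free_irq(chan->irq, chan);
          /* ... remaining teardown ... */
      }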
* dmatest: Use custom map/unmap for destination buffer (Atsushi Nemoto, 2009-01-13; 1 file, +31/-4)

  The dmatest driver should use DMA_BIDIRECTIONAL on the destination buffer to ensure that the poison values are written to RAM and not just written to cache and discarded.

  Acked-by: Haavard Skinnemoen <haavard.skinnemoen@atmel.com>
  Acked-by: Maciej Sosnowski <maciej.sosnowski@intel.com>
  Signed-off-by: Atsushi Nemoto <anemo@mba.ocn.ne.jp>
  Signed-off-by: Dan Williams <dan.j.williams@intel.com>
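  A minimal sketch of the idea using generic DMA-API calls (not the actual dmatest code; the helper name and parameters are assumptions): a destination that the CPU pre-fills with a poison pattern must be mapped DMA_BIDIRECTIONAL so the CPU writes reach RAM before the transfer and the device's writes are visible to the CPU afterwards.

      #include <linux/dma-mapping.h>
      #include <linux/string.h>

      /* Sketch: 'dev', 'dst' and 'len' come from the surrounding test setup. */
      static dma_addr_t map_poisoned_dst(struct device *dev, void *dst, size_t len)
      {
          dma_addr_t dma_dst;

          memset(dst, 0xaa, len);    /* CPU-written poison pattern */

          /* BIDIRECTIONAL: flush the poison to RAM, then let the device write */
          dma_dst = dma_map_single(dev, dst, len, DMA_BIDIRECTIONAL);
          if (dma_mapping_error(dev, dma_dst))
              return (dma_addr_t)0;  /* caller treats 0 as failure in this sketch */

          return dma_dst;
      }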
* fsldma: use a valid 'device' for dma_pool_create (Dan Williams, 2009-01-12; 1 file, +1/-1)

  The dmaengine sysfs implementation was fixed to support proper lifetime rules, which means that the current:

      new_fsl_chan->dev = &new_fsl_chan->common.dev->device;

  ...retrieves a NULL pointer because new_fsl_chan->common.dev has not been allocated at this point. So, set new_fsl_chan->dev to a valid device.

  Cc: Li Yang <leoli@freescale.com>
  Cc: Zhang Wei <zw@zh-kernel.org>
  Reported-by: Ira Snyder <iws@ovro.caltech.edu>
  Tested-by: Ira Snyder <iws@ovro.caltech.edu>
  Signed-off-by: Dan Williams <dan.j.williams@intel.com>
* dmaengine: fix dependency chaining (Yuri Tikhonov, 2009-01-12; 1 file, +2/-0)

  In dmaengine we track the dependencies between the descriptors using the 'next' pointers of the structure. These pointers are set to NULL as soon as the corresponding descriptor has been submitted to the channel (in dma_run_dependencies()). However, the first 'next' in the chain was left set even though tx->next had already been submitted, which may lead to multiple submissions of the same descriptor. This patch fixes that. Some previous implementations of the xxx_run_dependencies() functions already had this fix in place; the fdb..0eaf3 commit, besides the correct things, broke it.

  Cc: <stable@kernel.org>
  Signed-off-by: Yuri Tikhonov <yur@emcraft.com>
  Signed-off-by: Dan Williams <dan.j.williams@intel.com>
* ioat: fix self test for multi-channel case (Dan Williams, 2009-01-06; 1 file, +7/-6)

  In the multiple device case we need to re-arm the completion and protect against concurrent self-tests. The printk from the test callback is removed as it can arbitrarily delay completion of the test.

  Cc: <stable@kernel.org>
  Cc: Maciej Sosnowski <maciej.sosnowski@intel.com>
  Signed-off-by: Dan Williams <dan.j.williams@intel.com>
* dmaengine: bump initcall level to arch_initcall (Dan Williams, 2009-01-06; 1 file, +2/-2)

  There are dmaengine users that would like to register dma devices at subsys_initcall time to ensure channels are available by device_initcall time.

  Cc: Maciej Sosnowski <maciej.sosnowski@intel.com>
  Cc: Guennadi Liakhovetski <g.liakhovetski@gmx.de>
  Cc: Nicolas Ferre <nicolas.ferre@atmel.com>
  Signed-off-by: Dan Williams <dan.j.williams@intel.com>
* dmaengine: advertise all channels on a device to dma_filter_fn (Dan Williams, 2009-01-06; 1 file, +13/-20)

  Allow dma_filter_fn routines to disambiguate multiple channels on a device rather than assuming that all channels on a device are equal.

  Cc: Maciej Sosnowski <maciej.sosnowski@intel.com>
  Reported-by: Guennadi Liakhovetski <g.liakhovetski@gmx.de>
  Signed-off-by: Dan Williams <dan.j.williams@intel.com>
* dmaengine: use idr for registering dma device numbers (Dan Williams, 2009-01-06; 1 file, +25/-2)

  This brings some predictability to dma device numbers, i.e. an rmmod/insmod cycle may now result in /sys/class/dma/dma0chan0 being restored rather than /sys/class/dma/dma1chan0 appearing.

  Cc: Maciej Sosnowski <maciej.sosnowski@intel.com>
  Signed-off-by: Dan Williams <dan.j.williams@intel.com>
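  A minimal sketch of the general idr pattern, using today's idr_alloc() rather than the idr_pre_get()/idr_get_new() calls of the 2009 code, and with assumed names: the idr hands out the lowest free device number, so an id freed by rmmod is reused on the next insmod.

      #include <linux/gfp.h>
      #include <linux/idr.h>
      #include <linux/mutex.h>

      static DEFINE_IDR(example_dma_idr);
      static DEFINE_MUTEX(example_dma_idr_lock);

      /* Allocate the lowest unused device number; freed ids are handed out again. */
      static int example_get_dev_id(void *dev)
      {
          int id;

          mutex_lock(&example_dma_idr_lock);
          id = idr_alloc(&example_dma_idr, dev, 0, 0, GFP_KERNEL);
          mutex_unlock(&example_dma_idr_lock);

          return id;    /* negative errno on failure */
      }

      static void example_put_dev_id(int id)
      {
          mutex_lock(&example_dma_idr_lock);
          idr_remove(&example_dma_idr, id);
          mutex_unlock(&example_dma_idr_lock);
      }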
* dmaengine: add a release for dma class devices and dependent infrastructure (Dan Williams, 2009-01-06; 4 files, +141/-72)

  Resolves:

      WARNING: at drivers/base/core.c:122 device_release+0x4d/0x52()
      Device 'dma0chan0' does not have a release() function, it is broken and must be fixed.

  The dma_chan_dev object is introduced to gear-match sysfs kobject and dmaengine channel lifetimes. When a channel is removed, access to the sysfs entries returns -ENODEV until the kobject can be released. The bulk of the change is updates to existing code to handle the extra layer of indirection between a dma_chan and its struct device.

  Reported-by: Alexander Beregalov <a.beregalov@gmail.com>
  Acked-by: Stephen Hemminger <shemminger@vyatta.com>
  Cc: Haavard Skinnemoen <haavard.skinnemoen@atmel.com>
  Signed-off-by: Dan Williams <dan.j.williams@intel.com>
* ioat: do not perform removal actions at shutdown (Dan Williams, 2009-01-06; 1 file, +34/-58)

  Unregistering services should only happen at "remove" time. This prevents the device from being unregistered while dmaengine clients are still active. Also, the comment on ioat_remove is stale since removal is prevented while a channel may be in use.

  Reported-by: Alexander Beregalov <a.beregalov@gmail.com>
  Signed-off-by: Dan Williams <dan.j.williams@intel.com>
* iop-adma: enable module removal (Dan Williams, 2009-01-06; 1 file, +0/-4)

  Signed-off-by: Dan Williams <dan.j.williams@intel.com>
* iop-adma: kill debug BUG_ON (Dan Williams, 2009-01-06; 1 file, +0/-2)

  This BUG_ON caught problems in early development but now it is in the way as it invalidly triggers when trying to remove the module.

  Signed-off-by: Dan Williams <dan.j.williams@intel.com>
* iop-adma: let devm do its job, don't duplicate free (Dan Williams, 2009-01-06; 1 file, +0/-13)

  No need to free stuff that the devm infrastructure will take care of...

  Signed-off-by: Dan Williams <dan.j.williams@intel.com>
* dmaengine: kill enum dma_state_client (Dan Williams, 2009-01-06; 2 files, +8/-14)

  DMA_NAK is now useless. We can just use a bool instead.

  Reviewed-by: Andrew Morton <akpm@linux-foundation.org>
  Signed-off-by: Dan Williams <dan.j.williams@intel.com>
* dmaengine: remove 'bigref' infrastructure (Dan Williams, 2009-01-06; 3 files, +9/-80)

  Reference counting is done at the module level so clients need not worry that a channel will leave while they are actively using dmaengine.

  Reviewed-by: Andrew Morton <akpm@linux-foundation.org>
  Signed-off-by: Dan Williams <dan.j.williams@intel.com>
* dmaengine: kill struct dma_client and supporting infrastructure (Dan Williams, 2009-01-06; 6 files, +13/-86)

  All users have been converted to either the general-purpose allocator, dma_find_channel, or dma_request_channel.

  Reviewed-by: Andrew Morton <akpm@linux-foundation.org>
  Signed-off-by: Dan Williams <dan.j.williams@intel.com>
* dmaengine: replace dma_async_client_register with dmaengine_get (Dan Williams, 2009-01-06; 1 file, +6/-16)

  Now that clients no longer need to be notified of channel arrival, dma_async_client_register can simply increment the dmaengine_ref_count.

  Reviewed-by: Andrew Morton <akpm@linux-foundation.org>
  Signed-off-by: Dan Williams <dan.j.williams@intel.com>
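  For context, a minimal sketch of how an opportunistic memory-to-memory client uses the resulting interface (the function name is assumed, not code from this series): dmaengine_get()/dmaengine_put() bracket the period during which shared channels may be grabbed, dma_find_channel() returns one or NULL, and dma_issue_pending_all() kicks all engines.

      #include <linux/dmaengine.h>

      /* Sketch of an opportunistic mem-to-mem client, not dmaengine internals. */
      static void example_memcpy_offload(void)
      {
          struct dma_chan *chan;

          dmaengine_get();                       /* take a dmaengine reference */

          chan = dma_find_channel(DMA_MEMCPY);   /* may be NULL: fall back to CPU */
          if (chan) {
              /* ... prep and submit a memcpy descriptor on 'chan' ... */
              dma_issue_pending_all();           /* flush pending work everywhere */
          }

          dmaengine_put();                       /* drop the reference when done */
      }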
* atmel-mci: convert to dma_request_channel and down-level dma_slave (Dan Williams, 2009-01-06; 2 files, +6/-27)

  dma_request_channel provides an exclusive channel, so we no longer need to pass slave data through dmaengine.

  Cc: Haavard Skinnemoen <haavard.skinnemoen@atmel.com>
  Reviewed-by: Andrew Morton <akpm@linux-foundation.org>
  Signed-off-by: Dan Williams <dan.j.williams@intel.com>
* dmatest: convert to dma_request_channel (Dan Williams, 2009-01-06; 1 file, +43/-72)

  Replace the client registration infrastructure with a custom loop to poll for channels. Once dma_request_channel returns NULL, stop asking for channels. A user-visible side effect of this change is that loading the dmatest module before loading a dma driver will result in no channels being found; previously dmatest would get a callback. To facilitate testing in the built-in case, dmatest_init is marked as a late_initcall. Another side effect is that channels under test cannot be used for any other purpose.

  Cc: Haavard Skinnemoen <haavard.skinnemoen@atmel.com>
  Reviewed-by: Andrew Morton <akpm@linux-foundation.org>
  Signed-off-by: Dan Williams <dan.j.williams@intel.com>
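  A minimal sketch of the polling loop described above (assumed names, not the dmatest code): keep calling dma_request_channel() until it returns NULL, holding each returned channel exclusively for testing; dma_release_channel() gives a channel back when the test is torn down.

      #include <linux/dmaengine.h>

      /* Sketch: grab every currently free channel of one capability for testing. */
      static void example_grab_all_channels(void)
      {
          dma_cap_mask_t mask;
          struct dma_chan *chan;

          dma_cap_zero(mask);
          dma_cap_set(DMA_MEMCPY, mask);

          for (;;) {
              chan = dma_request_channel(mask, NULL, NULL);
              if (!chan)
                  break;    /* no more free channels: stop asking */
              /* ... start a test thread on 'chan'; it stays exclusive ... */
              /* ... later, dma_release_channel(chan) returns it ... */
          }
      }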
* dmaengine: introduce dma_request_channel and private channels (Dan Williams, 2009-01-06; 1 file, +139/-16)

  This interface is primarily for device-to-memory clients which need to search for dma channels with platform-specific characteristics. The prototype is:

      struct dma_chan *dma_request_channel(dma_cap_mask_t mask,
                                           dma_filter_fn filter_fn,
                                           void *filter_param);

  When the optional 'filter_fn' parameter is set to NULL, dma_request_channel simply returns the first channel that satisfies the capability mask. Otherwise, when the mask parameter is insufficient for specifying the necessary channel, the filter_fn routine can be used to disposition the available channels in the system. The filter_fn routine is called once for each free channel in the system. Upon seeing a suitable channel, filter_fn returns DMA_ACK, which flags that channel to be the return value from dma_request_channel. A channel allocated via this interface is exclusive to the caller until dma_release_channel() is called. To ensure that all channels are not consumed by the general-purpose allocator, the DMA_PRIVATE capability is provided to exclude a dma_device from general-purpose (memory-to-memory) consideration.

  Reviewed-by: Andrew Morton <akpm@linux-foundation.org>
  Acked-by: Maciej Sosnowski <maciej.sosnowski@intel.com>
  Signed-off-by: Dan Williams <dan.j.williams@intel.com>
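  A minimal sketch of a filter-based request. The names and the match criterion are hypothetical, and the filter returns a plain bool as per the later "kill enum dma_state_client" commit in this log rather than DMA_ACK: the filter inspects each free channel and accepts only the one matching a platform-specific criterion.

      #include <linux/dmaengine.h>

      /* Hypothetical match data; a real driver would use platform-specific info. */
      struct example_match {
          struct device *wanted_dev;
      };

      /* Called once per free channel; return true to claim this one. */
      static bool example_filter(struct dma_chan *chan, void *param)
      {
          struct example_match *match = param;

          return chan->device->dev == match->wanted_dev;
      }

      static struct dma_chan *example_request(struct example_match *match)
      {
          dma_cap_mask_t mask;
          struct dma_chan *chan;

          dma_cap_zero(mask);
          dma_cap_set(DMA_SLAVE, mask);

          chan = dma_request_channel(mask, example_filter, match);
          /* 'chan' is exclusive to us until dma_release_channel(chan) */
          return chan;    /* may be NULL if nothing matched */
      }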
* dmaengine: provide a common 'issue_pending_all' implementation (Dan Williams, 2009-01-06; 1 file, +24/-3)

  async_tx and net_dma each have open-coded versions of issue_pending_all, so provide a common routine in dmaengine. The implementation needs to walk the global device list, so implement rcu to allow dma_issue_pending_all to run lockless. Clients protect themselves from channel removal events by holding a dmaengine reference.

  Reviewed-by: Andrew Morton <akpm@linux-foundation.org>
  Signed-off-by: Dan Williams <dan.j.williams@intel.com>
* dmaengine: centralize channel allocation, introduce dma_find_channel (Dan Williams, 2009-01-06; 1 file, +167/-1)

  Allowing multiple clients to each define their own channel allocation scheme quickly leads to a pathological situation. For memory-to-memory offload all clients can share a central allocator. This simply moves the existing async_tx allocator to dmaengine with minimal fixups:

    * async_tx.c:get_chan_ref_by_cap --> dmaengine.c:nth_chan
    * async_tx.c:async_tx_rebalance --> dmaengine.c:dma_channel_rebalance
    * split out common code from async_tx.c:__async_tx_find_channel --> dma_find_channel

  Reviewed-by: Andrew Morton <akpm@linux-foundation.org>
  Signed-off-by: Dan Williams <dan.j.williams@intel.com>
* dmaengine: up-level reference counting to the module level (Dan Williams, 2009-01-06; 3 files, +129/-80)

  Simply, if a client wants any dmaengine channel then prevent all dmaengine modules from being removed. Once the clients are done, re-enable module removal. Why, beyond reducing complication:

    1/ Tracking reference counts per-transaction in an efficient manner, as is currently done, requires a complicated scheme to avoid cache-line bouncing effects.
    2/ Per-transaction ref-counting gives the false impression that a dma-driver can be gracefully removed ahead of its user (net, md, or dma-slave).
    3/ None of the in-tree dma-drivers talk to hot-pluggable hardware, but if such an engine were built one day we still would not need to notify clients of remove events. The driver can simply return NULL to a ->prep() request, something that is much easier for a client to handle.

  Reviewed-by: Andrew Morton <akpm@linux-foundation.org>
  Acked-by: Maciej Sosnowski <maciej.sosnowski@intel.com>
  Signed-off-by: Dan Williams <dan.j.williams@intel.com>
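  A minimal sketch of module-level pinning of the kind described, using generic kernel calls rather than the dmaengine internals (the helper names are assumptions): take a reference on the module that owns a channel's driver when a client grabs the channel, and drop it on release.

      #include <linux/device.h>
      #include <linux/dmaengine.h>
      #include <linux/errno.h>
      #include <linux/module.h>

      /* Sketch: pin the dma driver's module while a client holds one of its channels. */
      static int example_pin_channel_owner(struct dma_chan *chan)
      {
          struct module *owner = chan->device->dev->driver->owner;

          if (!try_module_get(owner))
              return -ENODEV;    /* driver is on its way out */
          return 0;
      }

      static void example_unpin_channel_owner(struct dma_chan *chan)
      {
          module_put(chan->device->dev->driver->owner);
      }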
* dmaengine: remove dependency on async_tx (Dan Williams, 2009-01-06; 4 files, +86/-6)

  async_tx.ko is a consumer of dma channels. A circular dependency arises if modules in drivers/dma rely on common code in async_tx.ko; it prevents either module from being unloaded. Move dma_wait_for_async_tx and async_tx_run_dependencies to dmaengine.o where they should have been from the beginning.

  Reviewed-by: Andrew Morton <akpm@linux-foundation.org>
  Signed-off-by: Dan Williams <dan.j.williams@intel.com>
* async_xor: dma_map destination DMA_BIDIRECTIONAL (Dan Williams, 2008-12-08; 2 files, +25/-6)

  Mapping the destination multiple times is a misuse of the dma-api. Since the destination may be reused as a source, ensure that it is only mapped once and that it is mapped bidirectionally. This appears to add ugliness on the unmap side in that it always reads back the destination address from the descriptor, but gcc can determine that dma_unmap is a nop and not emit the code that calculates its arguments.

  Cc: <stable@kernel.org>
  Cc: Saeed Bishara <saeed@marvell.com>
  Acked-by: Yuri Tikhonov <yur@emcraft.com>
  Signed-off-by: Dan Williams <dan.j.williams@intel.com>
* dmaengine: protect 'id' from concurrent registrations (Dan Williams, 2008-12-04; 1 file, +3/-0)

  There is a possibility to have two devices registered with the same id.

  Cc: <stable@kernel.org>
  Acked-by: Maciej Sosnowski <maciej.sosnowski@intel.com>
  Signed-off-by: Dan Williams <dan.j.williams@intel.com>
* ioat: wait for self-test completion (Dan Williams, 2008-12-04; 1 file, +4/-1)

  As part of the ioat_dma self-test it performs a printk from a completion callback. Depending on the system console configuration this output can take longer than a millisecond, causing the self-test to fail. Introduce a completion with a generous timeout to mitigate this failure.

  Cc: <stable@kernel.org>
  Acked-by: Maciej Sosnowski <maciej.sosnowski@intel.com>
  Signed-off-by: Dan Williams <dan.j.williams@intel.com>
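  A minimal sketch of the "completion with a generous timeout" approach using generic kernel primitives (names and the 3-second value are illustrative, not the ioat_dma code): the DMA callback completes a struct completion and the test waits on it with a timeout instead of relying on a fixed 1 ms delay.

      #include <linux/completion.h>
      #include <linux/errno.h>
      #include <linux/jiffies.h>

      /* Illustrative callback/wait pair; 'arg' is set up as a struct completion *. */
      static void example_selftest_cb(void *arg)
      {
          complete(arg);
      }

      static int example_wait_for_selftest(struct completion *cmp)
      {
          unsigned long tmo = msecs_to_jiffies(3000);    /* generous timeout */

          if (!wait_for_completion_timeout(cmp, tmo))
              return -ETIMEDOUT;    /* descriptor never completed */
          return 0;
      }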
* dmaengine: struct device - replace bus_id with dev_name(), dev_set_name() (Kay Sievers, 2008-11-11; 2 files, +13/-13)

  Acked-by: Greg Kroah-Hartman <gregkh@suse.de>
  Acked-by: Maciej Sosnowski <maciej.sosnowski@intel.com>
  Signed-off-by: Kay Sievers <kay.sievers@vrfy.org>
  Signed-off-by: Dan Williams <dan.j.williams@intel.com>
* iop-adma: use iop_paranoia() for debug BUG_ONs (Dan Williams, 2008-11-11; 1 file, +1/-1)

  Now that the critical read back to flush the next descriptor address is fixed, we can downgrade some BUG_ONs that need only be enabled when testing changes to the driver.

  Signed-off-by: Dan Williams <dan.j.williams@intel.com>
* iop-adma: add a dummy read to flush next descriptor update (Dan Williams, 2008-11-11; 1 file, +5/-4)

  The current dummy read references the wrong address, allowing the next descriptor address update to linger in the store buffer and get passed by an 'append' event. This issue was uncovered by the change from strongly-ordered to device memory for the adma registers.

  Signed-off-by: Dan Williams <dan.j.williams@intel.com>
* [3/4] I/OAT: fix async_tx.callback checking (Maciej Sosnowski, 2008-11-11; 1 file, +2/-2)

  async_tx.callback should be checked for the first, not the last, descriptor in the chain.

  Cc: <stable@kernel.org>
  Signed-off-by: Maciej Sosnowski <maciej.sosnowski@intel.com>
  Signed-off-by: David S. Miller <davem@davemloft.net>
* [2/4] I/OAT: fix dma_pin_iovec_pages() error handling (Maciej Sosnowski, 2008-11-11; 1 file, +6/-11)

  Error handling needs to be modified in dma_pin_iovec_pages(). It should return NULL instead of ERR_PTR (pinned_list is checked for NULL in tcp_recvmsg() to determine if iovec pages have been successfully pinned down). In case of error for the first iovec, local_list->nr_iovecs needs to be initialized.

  Cc: <stable@kernel.org>
  Signed-off-by: Maciej Sosnowski <maciej.sosnowski@intel.com>
  Signed-off-by: David S. Miller <davem@davemloft.net>
* [1/4] I/OAT: fix channel resources free for not allocated channels (Maciej Sosnowski, 2008-11-11; 1 file, +7/-0)

  If the ioatdma driver is loaded but not used, it does not allocate descriptors. Before it frees channel resources it should first be sure that they have been previously allocated.

  Cc: <stable@kernel.org>
  Signed-off-by: Maciej Sosnowski <maciej.sosnowski@intel.com>
  Tested-by: Tom Picard <tom.s.picard@intel.com>
  Signed-off-by: Dan Williams <dan.j.williams@intel.com>
  Signed-off-by: David S. Miller <davem@davemloft.net>
* Merge branch 'i7300_idle' into release (Len Brown, 2008-10-25; 1 file, +5/-2)
|\
| * i7300_idle: Fix compile warning CONFIG_I7300_IDLE_IOAT_CHANNEL not defined (Venki Pallipadi, 2008-10-24; 1 file, +1/-1)

    When the I7300_idle driver is not configured, there is a compile-time warning about IDLE_IOAT_CHANNEL not being defined. Fix it.

    Reported-by: Suresh Siddha <suresh.b.siddha@intel.com>
    Reported-by: Arjan van de Ven <arjan@linux.intel.com>
    Signed-off-by: Venkatesh Pallipadi <venkatesh.pallipadi@intel.com>
    Signed-off-by: Len Brown <len.brown@intel.com>
| * i7300_idle: Disable ioat channel only on platforms where idle driver can load (Venki Pallipadi, 2008-10-24; 1 file, +4/-1)

    Based on input from Andi Kleen: share the platform detection code with ioat_dma and disable the channel in the dma engine only for specific platforms.

    Signed-off-by: Venkatesh Pallipadi <venkatesh.pallipadi@intel.com>
    Signed-off-by: Len Brown <len.brown@intel.com>
* | Merge branch 'linus' into test (Len Brown, 2008-10-23; 5 files, +100/-190)
|\ \
| |/
|/|
    Conflicts:
      MAINTAINERS
      arch/x86/kernel/acpi/boot.c
      arch/x86/kernel/acpi/sleep.c
      drivers/acpi/Kconfig
      drivers/pnp/Makefile
      drivers/pnp/quirks.c

    Signed-off-by: Len Brown <len.brown@intel.com>
| * Merge branch 'next' of git://git.kernel.org/pub/scm/linux/kernel/git/djbw/async_tx (Linus Torvalds, 2008-10-20; 5 files, +100/-190)
| |\
      * 'next' of git://git.kernel.org/pub/scm/linux/kernel/git/djbw/async_tx:
        fsldma: allow Freescale Elo DMA driver to be compiled as a module
        fsldma: remove internal self-test from Freescale Elo DMA driver
        drivers/dma/dmatest.c: switch a GFP_ATOMIC to GFP_KERNEL
        dmatest: properly handle duplicate DMA channels
        drivers/dma/ioat_dma.c: drop code after return
        async_tx: make async_tx_run_dependencies() easier to read
| | * fsldma: allow Freescale Elo DMA driver to be compiled as a module (Timur Tabi, 2008-09-27; 3 files, +94/-55)

      Modify the Freescale Elo / Elo Plus DMA driver so that it can be compiled as a module. The primary change is to stop treating the DMA controller as a bus, and the DMA channels as devices on the bus. This is because the Open Firmware (OF) kernel code does not allow busses to be removed, so although we can call of_platform_bus_probe() to probe the DMA channels, there is no of_platform_bus_remove(). Instead, the DMA channels are manually probed, similar to what fsl_elbc_nand.c does.

      Cc: Scott Wood <scottwood@freescale.com>
      Acked-by: Li Yang <leoli@freescale.com>
      Signed-off-by: Timur Tabi <timur@freescale.com>
      Signed-off-by: Dan Williams <dan.j.williams@intel.com>
| | * fsldma: remove internal self-test from Freescale Elo DMA driver (Timur Tabi, 2008-09-24; 1 file, +0/-132)

      The Freescale Elo DMA driver runs an internal self-test before registering the channels with the DMA engine. This self-test has a fundamental flaw in that it calls the DMA engine's callback functions directly before the registration. However, the registration initializes some variables that the callback functions use, namely the device struct. The code works today because there are two device structs: the one created by the DMA engine, and one created by the Open Firmware (OF) subsystem. The self-test currently uses the device struct created by OF. However, in the future, some of the device structs created by OF will be eliminated. This means that the self-test will only have access to the device struct created by the DMA engine. But this device struct isn't initialized when the self-test runs, and this causes a kernel panic. Since there is already a DMA test module (dmatest), the internal self-test code is not useful anyway. It is extremely unlikely that the test will fail in normal usage. It may have been helpful during development, but not any more.

      Cc: Kumar Gala <galak@kernel.crashing.org>
      Cc: Li Yang <leoli@freescale.com>
      Cc: Scott Wood <scottwood@freescale.com>
      Signed-off-by: Timur Tabi <timur@freescale.com>
      Signed-off-by: Dan Williams <dan.j.williams@intel.com>
| | * drivers/dma/dmatest.c: switch a GFP_ATOMIC to GFP_KERNEL (Andrew Morton, 2008-09-19; 1 file, +1/-1)

      It was needlessly using the unreliable GFP_ATOMIC.

      Cc: Timur Tabi <timur@freescale.com>
      Acked-by: Haavard Skinnemoen <haavard.skinnemoen@atmel.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Dan Williams <dan.j.williams@intel.com>
| | * dmatest: properly handle duplicate DMA channels (Timur Tabi, 2008-09-19; 1 file, +5/-0)

      Update the dmatest driver so that it handles duplicate DMA channels properly. When a DMA client is notified of an available DMA channel, it must check if it has already allocated resources for that channel. If so, it should return DMA_DUP. This can happen, for example, if a DMA driver calls dma_async_device_register() more than once.

      Acked-by: Haavard Skinnemoen <haavard.skinnemoen@atmel.com>
      Signed-off-by: Timur Tabi <timur@freescale.com>
      Signed-off-by: Dan Williams <dan.j.williams@intel.com>
| | * drivers/dma/ioat_dma.c: drop code after return (Julia Lawall, 2008-09-14; 1 file, +0/-2)

      The break after the return serves no purpose.

      Signed-off-by: Julia Lawall <julia@diku.dk>
      Reviewed-by: Richard Genoud <richard.genoud@gmail.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Dan Williams <dan.j.williams@intel.com>
* | | i7300_idle driver v1.55 (Andy Henroid, 2008-10-22; 1 file, +3/-0)
|/ /
    The Intel 7300 Memory Controller supports dynamic throttling of memory which can be used to save power when the system is idle. This driver does the memory throttling when all CPUs are idle on such a system. Refer to the "Intel 7300 Memory Controller Hub (MCH)" datasheet for the config space description.

    Signed-off-by: Andy Henroid <andrew.d.henroid@intel.com>
    Signed-off-by: Len Brown <len.brown@intel.com>
    Signed-off-by: Venkatesh Pallipadi <venkatesh.pallipadi@intel.com>
* / dw_dmac: fix copy/paste bug in tasklet (Haavard Skinnemoen, 2008-10-04; 1 file, +1/-1)
|/
    The tasklet checks RAW.BLOCK twice, and does not check RAW.XFER. This is obviously wrong, and could theoretically cause the driver to hang.

    Reported-by: Nicolas Ferre <nicolas.ferre@atmel.com>
    Signed-off-by: Haavard Skinnemoen <haavard.skinnemoen@atmel.com>
    Acked-by: Dan Williams <dan.j.williams@intel.com>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
* Merge branch 'for-rmk' of git://git.marvell.com/orion (Russell King, 2008-08-09; 1 file, +1/-1)
|\
| * [ARM] Move include/asm-arm/plat-orion to arch/arm/plat-orion/include/plat (Lennert Buytenhek, 2008-08-09; 1 file, +1/-1)

    This patch performs the equivalent include directory shuffle for plat-orion, and fixes up all users.

    Signed-off-by: Lennert Buytenhek <buytenh@marvell.com>