Commit message    Author    Age    Files    Lines
...
| * i40iw: Change accelerated flag to boolHenry Orosco2017-12-282-3/+3
| | | | | | | | | | | | | | | | | | | | The accelerated flag only utilizes two values: 0 and 1. Modify accelerated flag in struct i40iw_cm_node to bool. Signed-off-by: Henry Orosco <henry.orosco@intel.com> Signed-off-by: Shiraz Saleem <shiraz.saleem@intel.com> Reviewed-by: Yuval Shaia <yuval.shaia@oracle.com> Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
| * IB/mlx5: Fix mlx5_ib_alloc_mr error flowNitzan Carmi2017-12-271-0/+1
ibmr.device is set only after ib_alloc_mr() completes successfully. Therefore, if mlx5_core_create_mkey() returns an error, the error flow calls mlx5_free_priv_descs(), which uses the not-yet-initialized ibmr.device and causes a NULL dereference oops. To fix this, set the IB device in the mr struct at an earlier stage (e.g. prior to calling mlx5_core_create_mkey()).
Fixes: 8a187ee52b04 ("IB/mlx5: Support the new memory registration API")
Signed-off-by: Max Gurtovoy <maxg@mellanox.com>
Signed-off-by: Nitzan Carmi <nitzanc@mellanox.com>
Signed-off-by: Leon Romanovsky <leon@kernel.org>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
| * IB/core: Verify that QP is security enabled in create and destroyMoni Shoua2017-12-272-1/+5
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | The XRC target QP create flow sets up qp_sec only if there is an IB link with LSM security enabled. However, several other related uAPI entry points blindly follow the qp_sec NULL pointer, resulting in a possible oops. Check for NULL before using qp_sec. Cc: <stable@vger.kernel.org> # v4.12 Fixes: d291f1a65232 ("IB/core: Enforce PKey security on QPs") Reviewed-by: Daniel Jurgens <danielj@mellanox.com> Signed-off-by: Moni Shoua <monis@mellanox.com> Signed-off-by: Leon Romanovsky <leon@kernel.org> Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
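The fix described above boils down to a defensive NULL check before any uAPI path dereferences qp_sec. A minimal sketch of that guard, with a hypothetical helper name (the real checks sit in the QP create/destroy security hooks, not in this exact form):

    #include <rdma/ib_verbs.h>

    /* Sketch only: skip security enforcement for QPs (e.g. XRC targets on
     * links without LSM security) that never had a qp_sec allocated.
     */
    static int qp_security_enforce(struct ib_qp *qp)
    {
            if (!qp->qp_sec)        /* no security context to enforce */
                    return 0;

            /* ... it is now safe to dereference qp->qp_sec ... */
            return 0;
    }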
| * IB/uverbs: Fix command checking as part of ib_uverbs_ex_modify_qp()Moni Shoua2017-12-271-2/+2
If the input command length is larger than the kernel supports, an error should be returned only when the unsupported trailing bytes are not cleared, not the other way around. This matches what all other callers of ib_is_udata_cleared() do and will avoid user ABI problems in the future.
Cc: <stable@vger.kernel.org> # v4.10
Fixes: 189aba99e700 ("IB/uverbs: Extend modify_qp and support packet pacing")
Reviewed-by: Yishai Hadas <yishaih@mellanox.com>
Signed-off-by: Moni Shoua <monis@mellanox.com>
Signed-off-by: Leon Romanovsky <leon@kernel.org>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
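ib_is_udata_cleared() returns true when a range of the user buffer is all zeros; the bug above was returning the error in the opposite case. A minimal sketch of the intended check, with a hypothetical wrapper (not the exact uverbs code):

    #include <linux/errno.h>
    #include <rdma/ib_verbs.h>

    /* Accept an oversized command only if every byte beyond what this
     * kernel understands is zero; otherwise the request is unsupported.
     */
    static int check_unknown_input(struct ib_udata *udata, size_t known_len)
    {
            if (udata->inlen > known_len &&
                !ib_is_udata_cleared(udata, known_len,
                                     udata->inlen - known_len))
                    return -EOPNOTSUPP;
            return 0;
    }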
| * IB/mlx5: Serialize access to the VMA listMajd Dibbiny2017-12-272-0/+12
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | User-space applications can do mmap and munmap directly at any time. Since the VMA list is not protected with a mutex, concurrent accesses to the VMA list from the mmap and munmap can cause data corruption. Add a mutex around the list. Cc: <stable@vger.kernel.org> # v4.7 Fixes: 7c2344c3bbf9 ("IB/mlx5: Implements disassociate_ucontext API") Reviewed-by: Yishai Hadas <yishaih@mellanox.com> Signed-off-by: Majd Dibbiny <majd@mellanox.com> Signed-off-by: Leon Romanovsky <leon@kernel.org> Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
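The fix is the standard pattern of guarding a shared list with a mutex so the mmap and munmap paths cannot race. A generic sketch with a hypothetical context struct (not the mlx5 ucontext layout):

    #include <linux/list.h>
    #include <linux/mutex.h>

    struct uctx {
            struct list_head vma_list;
            struct mutex     vma_list_mutex;  /* serializes vma_list access */
    };

    static void uctx_init(struct uctx *c)
    {
            INIT_LIST_HEAD(&c->vma_list);
            mutex_init(&c->vma_list_mutex);
    }

    static void uctx_add_vma(struct uctx *c, struct list_head *node)
    {
            mutex_lock(&c->vma_list_mutex);    /* mmap path */
            list_add_tail(node, &c->vma_list);
            mutex_unlock(&c->vma_list_mutex);
    }

    static void uctx_del_vma(struct uctx *c, struct list_head *node)
    {
            mutex_lock(&c->vma_list_mutex);    /* munmap path */
            list_del(node);
            mutex_unlock(&c->vma_list_mutex);
    }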
| * IB/hfi: Only read capability registers if the capability existsMichael J. Ruhl2017-12-222-19/+12
During driver init, various registers are saved to allow restoration after an FLR or gen3 bump. Some of these registers are not available in some circumstances (i.e. virtual machines). This bug makes the driver unusable when the PCI device is passed into a VM; it fails during probe. Delete the unnecessary register read/write, and only access a register if the capability exists.
Cc: <stable@vger.kernel.org> # 4.14.x
Fixes: a618b7e40af2 ("IB/hfi1: Move saving PCI values to a separate function")
Reviewed-by: Mike Marciniszyn <mike.marciniszyn@intel.com>
Signed-off-by: Michael J. Ruhl <michael.j.ruhl@intel.com>
Signed-off-by: Dennis Dalessandro <dennis.dalessandro@intel.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
| * IB/ipoib: Fix lockdep issue found on ipoib_ib_dev_heavy_flushAlex Vesker2017-12-221-4/+3
The locking order of vlan_rwsem (LOCK A) and then rtnl (LOCK B) contradicts other flows such as ipoib_open, possibly causing a deadlock. To prevent this deadlock, heavy flush is called with the RTNL locked and only then tries to acquire vlan_rwsem. This deadlock is possible only when there are child interfaces.

[ 140.941758] ======================================================
[ 140.946276] WARNING: possible circular locking dependency detected
[ 140.950950] 4.15.0-rc1+ #9 Tainted: G O
[ 140.954797] ------------------------------------------------------
[ 140.959424] kworker/u32:1/146 is trying to acquire lock:
[ 140.963450]  (rtnl_mutex){+.+.}, at: [<ffffffffc083516a>] __ipoib_ib_dev_flush+0x2da/0x4e0 [ib_ipoib]
[ 140.970006] but task is already holding lock:
[ 140.975141]  (&priv->vlan_rwsem){++++}, at: [<ffffffffc0834ee1>] __ipoib_ib_dev_flush+0x51/0x4e0 [ib_ipoib]
[ 140.982105] which lock already depends on the new lock.
[ 140.990023] the existing dependency chain (in reverse order) is:
[ 140.998650] -> #1 (&priv->vlan_rwsem){++++}:
[ 141.005276]        down_read+0x4d/0xb0
[ 141.009560]        ipoib_open+0xad/0x120 [ib_ipoib]
[ 141.014400]        __dev_open+0xcb/0x140
[ 141.017919]        __dev_change_flags+0x1a4/0x1e0
[ 141.022133]        dev_change_flags+0x23/0x60
[ 141.025695]        devinet_ioctl+0x704/0x7d0
[ 141.029156]        sock_do_ioctl+0x20/0x50
[ 141.032526]        sock_ioctl+0x221/0x300
[ 141.036079]        do_vfs_ioctl+0xa6/0x6d0
[ 141.039656]        SyS_ioctl+0x74/0x80
[ 141.042811]        entry_SYSCALL_64_fastpath+0x1f/0x96
[ 141.046891] -> #0 (rtnl_mutex){+.+.}:
[ 141.051701]        lock_acquire+0xd4/0x220
[ 141.055212]        __mutex_lock+0x88/0x970
[ 141.058631]        __ipoib_ib_dev_flush+0x2da/0x4e0 [ib_ipoib]
[ 141.063160]        __ipoib_ib_dev_flush+0x71/0x4e0 [ib_ipoib]
[ 141.067648]        process_one_work+0x1f5/0x610
[ 141.071429]        worker_thread+0x4a/0x3f0
[ 141.074890]        kthread+0x141/0x180
[ 141.078085]        ret_from_fork+0x24/0x30
[ 141.081559] other info that might help us debug this:
[ 141.088967]  Possible unsafe locking scenario:
[ 141.094280]        CPU0                    CPU1
[ 141.097953]        ----                    ----
[ 141.101640]   lock(&priv->vlan_rwsem);
[ 141.104771]                                lock(rtnl_mutex);
[ 141.109207]                                lock(&priv->vlan_rwsem);
[ 141.114032]   lock(rtnl_mutex);
[ 141.116800]  *** DEADLOCK ***

Fixes: b4b678b06f6e ("IB/ipoib: Grab rtnl lock on heavy flush when calling ndo_open/stop")
Signed-off-by: Alex Vesker <valex@mellanox.com>
Signed-off-by: Leon Romanovsky <leon@kernel.org>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
| * IB/mlx5: Fix congestion counters in LAG modeMajd Dibbiny2017-12-225-42/+66
| | | | | | | | | | | | | | | | | | | | | | | | | | Congestion counters are counted and queried per physical function. When working in LAG mode, CNP packets can be sent or received on both of the functions, thus congestion counters should be aggregated from the two physical functions. Fixes: e1f24a79f424 ("IB/mlx5: Support congestion related counters") Signed-off-by: Majd Dibbiny <majd@mellanox.com> Reviewed-by: Aviv Heller <avivh@mellanox.com> Signed-off-by: Leon Romanovsky <leon@kernel.org> Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
| * RDMA/vmw_pvrdma: Avoid use after free due to QP/CQ/SRQ destroyBryan Tan2017-12-225-22/+22
| | | | | | | | | | | | | | | | | | | | | | | | | | The use of wait queues in vmw_pvrdma for handling concurrent access to a resource leaves a race condition which can cause a use after free bug. Fix this by using the pattern from other drivers, complete() protected by dec_and_test to ensure complete() is called only once. Fixes: 29c8d9eba550 ("IB: Add vmw_pvrdma driver") Signed-off-by: Bryan Tan <bryantan@vmware.com> Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
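The "complete() protected by dec_and_test" pattern referenced here makes the completion fire exactly once, when the last reference is dropped, so the destroy path can wait without racing. A generic sketch (hypothetical resource struct, not the pvrdma objects):

    #include <linux/completion.h>
    #include <linux/refcount.h>

    struct res {
            refcount_t        refcnt;
            struct completion free_done;
    };

    static void res_init(struct res *r)
    {
            refcount_set(&r->refcnt, 1);        /* creator holds one reference */
            init_completion(&r->free_done);
    }

    static void res_put(struct res *r)
    {
            if (refcount_dec_and_test(&r->refcnt))
                    complete(&r->free_done);    /* runs exactly once */
    }

    static void res_destroy(struct res *r)
    {
            res_put(r);                         /* drop the creator reference */
            wait_for_completion(&r->free_done); /* all other users are gone */
            /* safe to free the object here */
    }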
| * RDMA/vmw_pvrdma: Use refcount_dec_and_test to avoid warningBryan Tan2017-12-221-2/+2
| | | | | | | | | | | | | | | | | | | | | | | | | | | | refcount_dec generates a warning when the operation causes the refcount to hit zero. Avoid this by using refcount_dec_and_test. Fixes: 8b10ba783c9d ("RDMA/vmw_pvrdma: Add shared receive queue support") Reviewed-by: Adit Ranadive <aditr@vmware.com> Reviewed-by: Aditya Sarwade <asarwade@vmware.com> Reviewed-by: Jorgen Hansen <jhansen@vmware.com> Signed-off-by: Bryan Tan <bryantan@vmware.com> Reviewed-by: Leon Romanovsky <leonro@mellanox.com> Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
| * RDMA/vmw_pvrdma: Call ib_umem_release on destroy QP pathBryan Tan2017-12-221-0/+7
| | | | | | | | | | | | | | | | | | | | | | | | The QP cleanup did not previously call ib_umem_release, resulting in a user-triggerable kernel resource leak. Fixes: 29c8d9eba550 ("IB: Add vmw_pvrdma driver") Reviewed-by: Adit Ranadive <aditr@vmware.com> Reviewed-by: Aditya Sarwade <asarwade@vmware.com> Reviewed-by: Jorgen Hansen <jhansen@vmware.com> Signed-off-by: Bryan Tan <bryantan@vmware.com> Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
| * iw_cxgb4: when flushing, complete all wrs in a chainSteve Wise2017-12-221-2/+26
| | | | | | | | | | | | | | | | | | | | | | | | | | | | If a wr chain was posted and needed to be flushed, only the first wr in the chain was completed with FLUSHED status. The rest were never completed. This caused isert to hang on shutdown due to the missing completions which left iscsi IO commands referenced, stalling the shutdown. Fixes: 4fe7c2962e11 ("iw_cxgb4: refactor sq/rq drain logic") Cc: stable@vger.kernel.org Signed-off-by: Steve Wise <swise@opengridcomputing.com> Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
| * iw_cxgb4: reflect the original WR opcode in drain cqesSteve Wise2017-12-224-11/+50
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | The flush/drain logic was not retaining the original wr opcode in its completion. This can cause problems if the application uses the completion opcode to make decisions. Use bit 10 of the CQE header word to indicate the CQE is a special drain completion, and save the original WR opcode in the cqe header opcode field. Fixes: 4fe7c2962e11 ("iw_cxgb4: refactor sq/rq drain logic") Cc: stable@vger.kernel.org Signed-off-by: Steve Wise <swise@opengridcomputing.com> Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
| * iw_cxgb4: Only validate the MSN for successful completionsSteve Wise2017-12-221-3/+3
| | | | | | | | | | | | | | | | | | | | If the RECV CQE is in error, ignore the MSN check. This was causing recvs that were flushed into the sw cq to be completed with the wrong status (BAD_MSN instead of FLUSHED). Cc: stable@vger.kernel.org Signed-off-by: Steve Wise <swise@opengridcomputing.com> Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
| * IB/ipoib: Restore MM behavior in case of tx_ring allocation failureYuval Shaia2017-12-131-0/+1
| | | | | | | | | | | | | | | | | | memalloc_noio_save modifies the behavior of MM, we must restore it after we are done. Fixes: d83187dda9b9 ("IB/IPoIB: Convert IPoIB to memalloc_noio_* calls") Signed-off-by: Yuval Shaia <yuval.shaia@oracle.com> Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
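memalloc_noio_save() returns the previous NOIO state, and that value must be handed back to memalloc_noio_restore() on every exit path, including failures. A minimal sketch of the balanced usage (hypothetical ring-allocation helper):

    #include <linux/sched/mm.h>
    #include <linux/vmalloc.h>

    static void *alloc_tx_ring(size_t size)
    {
            unsigned int noio_flag = memalloc_noio_save();
            void *ring = vzalloc(size);

            memalloc_noio_restore(noio_flag);  /* restore MM behavior even if
                                                * the allocation failed */
            return ring;                       /* may be NULL */
    }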
| * iw_cxgb4: only insert drain cqes if wq is flushedSteve Wise2017-12-112-2/+17
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | Only insert our special drain CQEs to support ib_drain_sq/rq() after the wq is flushed. Otherwise, existing but not yet polled CQEs can be returned out of order to the user application. This can happen when the QP has exited RTS but not yet flushed the QP, which can happen during a normal close (vs abortive close). In addition never count the drain CQEs when determining how many CQEs need to be synthesized during the flush operation. This latter issue should never happen if the QP is properly flushed before inserting the drain CQE, but I wanted to avoid corrupting the CQ state. So we handle it and log a warning once. Fixes: 4fe7c2962e11 ("iw_cxgb4: refactor sq/rq drain logic") Signed-off-by: Steve Wise <swise@opengridcomputing.com> Cc: stable@vger.kernel.org Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
| * iw_cxgb4: only clear the ARMED bit if a notification is neededSteve Wise2017-12-071-4/+4
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | In __flush_qp(), the CQ ARMED bit was being cleared regardless of whether any notification is actually needed. This resulted in the iser termination logic getting stuck in ib_drain_sq() because the CQ was not marked ARMED and thus the drain CQE notification wasn't triggered. This new bug was exposed when this commit was merged: commit cbb40fadd31c ("iw_cxgb4: only call the cq comp_handler when the cq is armed") Signed-off-by: Steve Wise <swise@opengridcomputing.com> Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
| * RDMA/netlink: Fix general protection faultLeon Romanovsky2017-12-074-4/+4
The RDMA netlink core code checks validity of messages by ensuring that type and operand are in range. It works well for almost all clients except NLDEV, whose cb_table has fewer entries than the number of operands. A request to access such an operand will trigger the following kernel panic. This patch updates all places where cb_table is declared for consistency, but only NLDEV actually needs it.

general protection fault: 0000 [#1] SMP DEBUG_PAGEALLOC KASAN
Modules linked in:
CPU: 0 PID: 522 Comm: syz-executor6 Not tainted 4.13.0+ #4
Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS rel-1.7.5-0-ge51488c-20140602_164612-nilsson.home.kraxel.org 04/01/2014
task: ffff8800657799c0 task.stack: ffff8800695d000
RIP: 0010:rdma_nl_rcv_msg+0x13a/0x4c0
RSP: 0018:ffff8800695d7838 EFLAGS: 00010207
RAX: dffffc0000000000 RBX: 1ffff1000d2baf0b RCX: 00000000704ff4d7
RDX: 0000000000000000 RSI: ffffffff81ddb03c RDI: 00000003827fa6bc
RBP: ffff8800695d7900 R08: ffffffff82ec0578 R09: 0000000000000000
R10: ffff8800695d7900 R11: 0000000000000001 R12: 000000000000001c
R13: ffff880069d31e00 R14: 00000000ffffffff R15: ffff880069d357c0
FS: 00007fee6acb8700(0000) GS:ffff88006ca00000(0000) knlGS:0000000000000000
CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 00000000201a9000 CR3: 0000000059766000 CR4: 00000000000006b0
DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
Call Trace:
 ? rdma_nl_multicast+0x80/0x80
 rdma_nl_rcv+0x36b/0x4d0
 ? ibnl_put_attr+0xc0/0xc0
 netlink_unicast+0x4bd/0x6d0
 ? netlink_sendskb+0x50/0x50
 ? drop_futex_key_refs.isra.4+0x68/0xb0
 netlink_sendmsg+0x9ab/0xbd0
 ? nlmsg_notify+0x140/0x140
 ? wake_up_q+0xa1/0xf0
 ? drop_futex_key_refs.isra.4+0x68/0xb0
 sock_sendmsg+0x88/0xd0
 sock_write_iter+0x228/0x3c0
 ? sock_sendmsg+0xd0/0xd0
 ? do_futex+0x3e5/0xb20
 ? iov_iter_init+0xaf/0x1d0
 __vfs_write+0x46e/0x640
 ? sched_clock_cpu+0x1b/0x190
 ? __vfs_read+0x620/0x620
 ? __fget+0x23a/0x390
 ? rw_verify_area+0xca/0x290
 vfs_write+0x192/0x490
 SyS_write+0xde/0x1c0
 ? SyS_read+0x1c0/0x1c0
 ? trace_hardirqs_on_thunk+0x1a/0x1c
 entry_SYSCALL_64_fastpath+0x18/0xad
RIP: 0033:0x7fee6a74a219
RSP: 002b:00007fee6acb7d58 EFLAGS: 00000212 ORIG_RAX: 0000000000000001
RAX: ffffffffffffffda RBX: 0000000000638000 RCX: 00007fee6a74a219
RDX: 0000000000000078 RSI: 0000000020141000 RDI: 0000000000000006
RBP: 0000000000000046 R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000212 R12: ffff8800695d7f98
R13: 0000000020141000 R14: 0000000000000006 R15: 00000000ffffffff
Code: d6 48 b8 00 00 00 00 00 fc ff df 66 41 81 e4 ff 03 44 8d 72 ff 4a 8d 3c b5 c0 a6 7f 82 44 89 b5 4c ff ff ff 48 89 f9 48 c1 e9 03 <0f> b6 0c 01 48 89 f8 83 e0 07 83 c0 03 38 c8 7c 08 84 c9 0f 85
RIP: rdma_nl_rcv_msg+0x13a/0x4c0 RSP: ffff8800695d7838
---[ end trace ba085d123959c8ec ]---
Kernel panic - not syncing: Fatal exception

Cc: syzkaller <syzkaller@googlegroups.com>
Fixes: b4c598a67ea1 ("RDMA/netlink: Implement nldev device dumpit calback")
Reviewed-by: Mark Bloch <markb@mellanox.com>
Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
Signed-off-by: Doug Ledford <dledford@redhat.com>
| * IB/mlx4: Fix RSS hash fields restrictionsGuy Levi2017-12-071-7/+19
Mistakenly, the driver didn't allow RSS hash field combinations that involve both IPv4 and IPv6 protocols. This bug caused failures in users' RSS use cases. Consequently, this patch fixes the bug and allows any combination that the HW can support. Additionally, the patch makes the driver return an error when the user provides an unsupported mask for the RSS hash fields.
Fixes: 3078f5f1bd8b ("IB/mlx4: Add support for RSS QP")
Signed-off-by: Guy Levi <guyle@mellanox.com>
Reviewed-by: Yishai Hadas <yishaih@mellanox.com>
Signed-off-by: Leon Romanovsky <leon@kernel.org>
Signed-off-by: Doug Ledford <dledford@redhat.com>
| * IB/core: Don't enforce PKey security on SMI MADsDaniel Jurgens2017-12-071-2/+5
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | Per the infiniband spec an SMI MAD can have any PKey. Checking the pkey on SMI MADs is not necessary, and it seems that some older adapters using the mthca driver don't follow the convention of using the default PKey, resulting in false denials, or errors querying the PKey cache. SMI MAD security is still enforced, only agents allowed to manage the subnet are able to receive or send SMI MADs. Reported-by: Chris Blake <chrisrblake93@gmail.com> Cc: <stable@vger.kernel.org> # v4.12 Fixes: 47a2b338fe63 ("IB/core: Enforce security on management datagrams") Signed-off-by: Daniel Jurgens <danielj@mellanox.com> Reviewed-by: Parav Pandit <parav@mellanox.com> Signed-off-by: Leon Romanovsky <leon@kernel.org> Signed-off-by: Doug Ledford <dledford@redhat.com>
| * IB/core: Bound check alternate path port numberDaniel Jurgens2017-12-071-0/+6
| | | | | | | | | | | | | | | | | | | | | | | | The alternate port number is used as an array index in the IB security implementation, invalid values can result in a kernel panic. Cc: <stable@vger.kernel.org> # v4.12 Fixes: d291f1a65232 ("IB/core: Enforce PKey security on QPs") Signed-off-by: Daniel Jurgens <danielj@mellanox.com> Reviewed-by: Parav Pandit <parav@mellanox.com> Signed-off-by: Leon Romanovsky <leon@kernel.org> Signed-off-by: Doug Ledford <dledford@redhat.com>
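Because the alternate path's port number ends up as an array index in the security code, it must be validated against the device's port range first. A sketch of such a check using rdma_is_port_valid() (assumed available here; the exact location of the mainline check may differ):

    #include <linux/errno.h>
    #include <rdma/ib_verbs.h>

    static int check_alt_port(struct ib_device *dev,
                              const struct ib_qp_attr *attr, int attr_mask)
    {
            /* Reject an out-of-range alt_port_num before it is ever used
             * as an index into per-port security state.
             */
            if ((attr_mask & IB_QP_ALT_PATH) &&
                !rdma_is_port_valid(dev, attr->alt_port_num))
                    return -EINVAL;
            return 0;
    }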
| * IB/core: Only enforce security for InfiniBandDaniel Jurgens2017-12-011-4/+46
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | For now the only LSM security enforcement mechanism available is specific to InfiniBand. Bypass enforcement for non-IB link types. This fixes a regression where modify_qp fails for iWARP because querying the PKEY returns -EINVAL. Cc: Paul Moore <paul@paul-moore.com> Cc: Don Dutile <ddutile@redhat.com> Cc: stable@vger.kernel.org Reported-by: Potnuri Bharat Teja <bharat@chelsio.com> Fixes: d291f1a65232("IB/core: Enforce PKey security on QPs") Fixes: 47a2b338fe63("IB/core: Enforce security on management datagrams") Signed-off-by: Daniel Jurgens <danielj@mellanox.com> Reviewed-by: Parav Pandit <parav@mellanox.com> Tested-by: Potnuri Bharat Teja <bharat@chelsio.com> Signed-off-by: Leon Romanovsky <leon@kernel.org> Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
| * RDMA/hns: Get rid of page operation after dma_alloc_coherentWei Hu (Xavier)2017-12-012-12/+14
In general, dma_alloc_coherent() returns a CPU virtual address and a DMA address, and we have no guarantee that the underlying memory even has an associated struct page at all. This patch gets rid of the page operation after dma_alloc_coherent(), and records the VA returned from dma_alloc_coherent() in the hem struct of the hns RoCE driver.
Fixes: 9a44353 ("IB/hns: Add driver files for hns RoCE driver")
Signed-off-by: Wei Hu (Xavier) <xavier.huwei@huawei.com>
Signed-off-by: Shaobo Xu <xushaobo2@huawei.com>
Signed-off-by: Lijun Ou <oulijun@huawei.com>
Signed-off-by: Yixian Liu <liuyixian@huawei.com>
Signed-off-by: Xiping Zhang (Francis) <zhangxiping3@huawei.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
| * RDMA/hns: Get rid of virt_to_page and vmap calls after dma_alloc_coherentWei Hu (Xavier)2017-12-012-26/+1
In general, dma_alloc_coherent() returns a CPU virtual address and a DMA address, and we have no guarantee that the virtual address is in either the linear map or vmalloc; it could be in some other special place. We have no guarantee that the underlying memory even has an associated struct page at all. The current code contains the incorrect usage dma_alloc_coherent + virt_to_page + vmap, which will probably introduce coherency problems. This patch gets rid of the virt_to_page and vmap calls, at Leon's suggestion.
The related link: https://lkml.org/lkml/2017/11/7/34
Fixes: 9a44353 ("IB/hns: Add driver files for hns RoCE driver")
Signed-off-by: Wei Hu (Xavier) <xavier.huwei@huawei.com>
Signed-off-by: Shaobo Xu <xushaobo2@huawei.com>
Signed-off-by: Lijun Ou <oulijun@huawei.com>
Signed-off-by: Yixian Liu <liuyixian@huawei.com>
Signed-off-by: Xiping Zhang (Francis) <zhangxiping3@huawei.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
| * RDMA/hns: Fix the issue of IOVA not page continuous in hip08Wei Hu\(Xavier\)2017-12-011-7/+15
If the SMMU is enabled, the length of an sg obtained from __iommu_map_sg_attrs is not 4kB. When the IOVA is set with the sg dma address, the IOVA will not be page contiguous, so the current code has an MTPT configuration error that can cause DMA operation failures. To fix this issue, calculate the IOVA based on the sg length.
Fixes: 3958cc5 ("RDMA/hns: Configure the MTPT in hip08")
Signed-off-by: Wei Hu (Xavier) <xavier.huwei@huawei.com>
Signed-off-by: Shaobo Xu <xushaobo2@huawei.com>
Signed-off-by: Lijun Ou <oulijun@huawei.com>
Signed-off-by: Yixian Liu <liuyixian@huawei.com>
Signed-off-by: Xiping Zhang (Francis) <zhangxiping3@huawei.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
| * IB/core: Init subsys if compiled to vmlinuz-coreDmitry Monakhov2017-12-011-1/+1
Once infiniband is compiled as a core component its subsystem must be enabled before device initialization. Otherwise there is a NULL pointer dereference during mlx4_core init, calltrace:
->device_add
  if (dev->class) {
    deref dev->class->p =>NULLPTR

#Config
CONFIG_NET_DEVLINK=y
CONFIG_MAY_USE_DEVLINK=y
CONFIG_MLX4_EN=y

Signed-off-by: Dmitry Monakhov <dmonakhov@openvz.org>
Reviewed-by: Leon Romanovsky <leonro@mellanox.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
| * RDMA/cma: Make sure that PSN is not over max allowedMoni Shoua2017-12-011-0/+1
| | | | | | | | | | | | | | | | | | | | | | | | This patch limits the initial value for PSN to 24 bits as spec requires. Signed-off-by: Moni Shoua <monis@mellanox.com> Signed-off-by: Mukesh Kacker <mukesh.kacker@oracle.com> Signed-off-by: Daniel Jurgens <danielj@mellanox.com> Reviewed-by: Parav Pandit <parav@mellanox.com> Signed-off-by: Leon Romanovsky <leon@kernel.org> Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
| * i40iw: Notify user of established connection after QP in RTSHenry Orosco2017-12-011-6/+11
| | | | | | | | | | | | | | | | | | | | | | | | Established CM event is sent prior to modifying QP to RTS state. This can result in application closing the connection before the QP is actually in RTS state. Move sending of established CM event to after modify QP to RTS. Fixes: f27b4746f378 ("i40iw: add connection management code") Signed-off-by: Henry Orosco <henry.orosco@intel.com> Signed-off-by: Shiraz Saleem <shiraz.saleem@intel.com> Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
| * i40iw: Move MPA request event for loopback after connectTatyana Nikolova2017-12-011-2/+6
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | For loopback, a MPA request event is generated when cm_node is initialized, which allows applications to act on the connect request before i40iw_connect() has completed. In some cases, the reject flow executes in parallel with the connect flow and doesn't delete an APBVT entry, because the apbvt_set variable is still not set by the connect flow. Move the MPA request event to the end of i40iw_connect() to notify application for a connect request, after connect has completed. Fixes: f27b4746f378 ("i40iw: add connection management code") Signed-off-by: Tatyana Nikolova <tatyana.e.nikolova@intel.com> Signed-off-by: Henry Orosco <henry.orosco@intel.com> Signed-off-by: Shiraz Saleem <shiraz.saleem@intel.com> Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
| * i40iw: Correct ARP index maskMustafa Ismail2017-12-011-1/+1
| | | | | | | | | | | | | | | | | | | | | | | | | | | | The ARP table entry indexes are aliased to 12bits instead of the intended 16bits when uploaded to the QP Context. This will present an issue when the number of connections exceeds 4096 as ARP entries are reused. Fix this by adjusting the mask to account for the full 16bits. Fixes: 4e9042e647ff ("i40iw: add hw and utils files") Signed-off-by: Mustafa Ismail <mustafa.ismail@intel.com> Signed-off-by: Shiraz Saleem <shiraz.saleem@intel.com> Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
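The fix simply widens the mask applied to the ARP table index from 12 bits to the full 16 bits that the QP context field holds. A small illustrative sketch (macro and helper names are hypothetical, not the i40iw defines):

    #include <linux/types.h>

    #define ARP_IDX_MASK_OLD 0x0fffU   /* 12 bits: aliases entries above 4095 */
    #define ARP_IDX_MASK_NEW 0xffffU   /* 16 bits: full index range */

    static u16 arp_index_field(u32 arp_index)
    {
            return (u16)(arp_index & ARP_IDX_MASK_NEW);
    }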
| * i40iw: Do not free sqbuf when event is I40IW_TIMER_TYPE_CLOSEMustafa Ismail2017-12-011-3/+3
| | | | | | | | | | | | | | | | | | | | When the event type is I40IW_TIMER_TYPE_CLOSE, there is no sqbuf and it should not be freed as one in i40iw_schedule_cm_timer(). Fixes: f27b4746f378 ("i40iw: add connection management code") Signed-off-by: Mustafa Ismail <mustafa.ismail@intel.com> Signed-off-by: Shiraz Saleem <shiraz.saleem@intel.com> Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
| * i40iw: Allocate a sdbuf per CQP WQEChien Tin Tung2017-12-012-14/+33
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | Currently there is only one sdbuf per Control QP (CQP) for programming Segment Descriptor (SD). If multiple SD work requests are posted simultaneously, the sdbuf is reused by all WQEs and new WQEs can corrupt previous WQEs sdbuf leading to incorrect SD programming. Fix this by allocating one sdbuf per CQP SQ WQE. When an SD command is posted, it will use the corresponding sdbuf for the WQE. Fixes: 86dbcd0f12e9 ("i40iw: add file to handle cqp calls") Signed-off-by: Chien Tin Tung <chien.tin.tung@intel.com> Signed-off-by: Shiraz Saleem <shiraz.saleem@intel.com> Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
| * IB: INFINIBAND should depend on HAS_DMAGeert Uytterhoeven2017-12-011-1/+1
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | If NO_DMA=y: ERROR: "bad_dma_ops" [net/sunrpc/xprtrdma/rpcrdma.ko] undefined! ERROR: "bad_dma_ops" [net/smc/smc.ko] undefined! ERROR: "bad_dma_ops" [net/rds/rds_rdma.ko] undefined! ERROR: "bad_dma_ops" [net/9p/9pnet_rdma.ko] undefined! ERROR: "bad_dma_ops" [drivers/nvme/target/nvmet-rdma.ko] undefined! ERROR: "bad_dma_ops" [drivers/nvme/host/nvme-rdma.ko] undefined! ERROR: "bad_dma_ops" [drivers/infiniband/ulp/srpt/ib_srpt.ko] undefined! ERROR: "bad_dma_ops" [drivers/infiniband/ulp/srp/ib_srp.ko] undefined! ERROR: "bad_dma_ops" [drivers/infiniband/ulp/isert/ib_isert.ko] undefined! ERROR: "bad_dma_ops" [drivers/infiniband/ulp/iser/ib_iser.ko] undefined! ERROR: "bad_dma_ops" [drivers/infiniband/ulp/ipoib/ib_ipoib.ko] undefined! ERROR: "bad_dma_ops" [drivers/infiniband/core/ib_core.ko] undefined! Before, this was handled implicitly by the dependency on PCI. Add an explicit dependency on HAS_DMA to fix this. Fixes: 931bc0d91639f8fb ("IB: Move PCI dependency from root KConfig to HW's KConfigs") Signed-off-by: Geert Uytterhoeven <geert@linux-m68k.org> Reviewed-by: Leon Romanovsky <leonro@mellanox.com> Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
| * IB/hfi1: Initialize bth1 in 16B rc ack builderDennis Dalessandro2017-12-011-1/+1
It is possible that the bth1 variable could be used uninitialized, so give it a default value. Otherwise we leak stack memory to the network.
Fixes: 5b6cabb0db77 ("IB/hfi1: Add 16B RC/UC support")
Reviewed-by: Don Hiatt <don.hiatt@intel.com>
Signed-off-by: Dennis Dalessandro <dennis.dalessandro@intel.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
* | infiniband: drop unknown function from core_priv.hRandy Dunlap2017-12-281-7/+0
| | | | | | | | | | | | | | | | | | Delete ibnl_chk_listeners() and its kernel-doc comments from the core_priv.h header file. There is no such function. Fixes: 233c1955835b ("RDMA/netlink: Reduce exposure of RDMA netlink functions") Signed-off-by: Randy Dunlap <rdunlap@infradead.org> Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
* | IB/core: Make sure that PSN does not overflowMajd Dibbiny2017-12-271-0/+16
| | | | | | | | | | | | | | | | | | | | | | The rq/sq->psn is 24 bits as defined in the IB spec, therefore we mask out the 8 most significant bits to avoid overflow in modify_qp. Signed-off-by: Majd Dibbiny <majd@mellanox.com> Signed-off-by: Daniel Jurgens <danielj@mellanox.com> Reviewed-by: Parav Pandit <parav@mellanox.com> Signed-off-by: Leon Romanovsky <leon@kernel.org> Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
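Since the PSN is a 24-bit field in the IB spec, the top 8 bits of the 32-bit attribute are masked off before the QP is modified. A minimal sketch of that masking (hypothetical helper; the real code sits in the modify_qp path):

    #include <rdma/ib_verbs.h>

    #define IB_PSN_MASK 0xffffff       /* PSN is 24 bits per the IB spec */

    static void clamp_psn(struct ib_qp_attr *attr, int attr_mask)
    {
            if (attr_mask & IB_QP_SQ_PSN)
                    attr->sq_psn &= IB_PSN_MASK;
            if (attr_mask & IB_QP_RQ_PSN)
                    attr->rq_psn &= IB_PSN_MASK;
    }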
* | IB/hfi1: Change slid arg in ingress_pkey_table_fail to 32bitDon Hiatt2017-12-222-6/+2
| | | | | | | | | | | | | | | | | | | | | | Change the slid arg to ingress_pkey_table_fail() to a full 32Bits and do not convert to 16Bits in caller. This is so we can keep everything 32bit in the kernel and only change to 16bit at the uapi boundary. Signed-off-by: Don Hiatt <don.hiatt@intel.com> Signed-off-by: Dennis Dalessandro <dennis.dalessandro@intel.com> Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
* | IB/core: Use rdma_cap_opa_mad to check for OPADon Hiatt2017-12-221-2/+1
| | | | | | | | | | | | | | | | Use rdma_cap_opa_mad() to check for OPA to promote code reuse. Signed-off-by: Don Hiatt <don.hiatt@intel.com> Signed-off-by: Dennis Dalessandro <dennis.dalessandro@intel.com> Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
* | i40iw: Fix the connection ORD value for loopbackTatyana Nikolova2017-12-221-12/+14
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | The accepting QP ORD value should be adjusted not to exceed the peer QP IRD value (RFC 6581). This is skipped for loopback. After the ORD is validated by i40iw_record_ird_ord(), adjust the ORD value of the loopback accepting QP to prevent overrunning the IRD space of the peer QP. Also move the ORD accounting for 0-byte RDMA read to i40iw_record_ird_ord(). Fixes: f27b4746f378 ("i40iw: add connection management code") Signed-off-by: Tatyana Nikolova <tatyana.e.nikolova@intel.com> Signed-off-by: Shiraz Saleem <shiraz.saleem@intel.com> Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
* | i40iw: Validate correct IRD/ORD connection parametersTatyana Nikolova2017-12-221-2/+3
| | | | | | | | | | | | | | | | | | | | | | | | | | Casting to u16 before validating IRD/ORD connection parameters could cause recording wrong IRD/ORD values in the cm_node. Validate the IRD/ORD parameters as they are passed by the application before recording them. Fixes: f27b4746f378 ("i40iw: add connection management code") Signed-off-by: Tatyana Nikolova <tatyana.e.nikolova@intel.com> Signed-off-by: Shiraz Saleem <shiraz.saleem@intel.com> Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
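The underlying problem is truncate-then-validate ordering: casting the 32-bit IRD/ORD values to u16 before checking them lets out-of-range values wrap into the valid range. A generic sketch of checking the full-width values first (limits and struct are illustrative, not the i40iw constants):

    #include <linux/types.h>

    #define MAX_IRD 64                 /* illustrative limits only */
    #define MAX_ORD 64

    struct conn_params { u16 ird, ord; };

    static void record_ird_ord(struct conn_params *p, u32 conn_ird, u32 conn_ord)
    {
            if (conn_ird > MAX_IRD)    /* range-check before truncation */
                    conn_ird = MAX_IRD;
            if (conn_ord > MAX_ORD)
                    conn_ord = MAX_ORD;

            p->ird = (u16)conn_ird;    /* truncation is now known to be safe */
            p->ord = (u16)conn_ord;
    }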
* | i40iw: Ignore LLP_DOUBT_REACHABILITY AEShiraz Saleem2017-12-221-1/+2
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | The LLP_DOUBT_REACHABILITY Asynchronous Event (AE) is an early warning of a connection issue. It is followed by LLP_TOO_MANY_RETRIES AE, if the retransmit threshold is reached and recovery is not possible for the connection. Currently we terminate the connection on receiving the LLP_DOUBT_REACHABILITY AE. Ignore this AE and terminate the connection only on LLP_TOO_MANY_RETRIES AE. This improves the user experience on cable disconnect/reconnect scenario while running iWARP traffic. On cable disconnect, the QP traffic is paused and the user has a larger and more reasonable timeout within which if the cable is reconnected, traffic can continue. Signed-off-by: Shiraz Saleem <shiraz.saleem@intel.com> Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
* | i40iw: Fix sequence number for the first partial FPDUShiraz Saleem2017-12-222-1/+2
| | | | | | | | | | | | | | | | | | | | Partial FPDU processing is broken as the sequence number for the first partial FPDU is wrong due to incorrect Q2 buffer offset. The offset should be 64 rather than 16. Fixes: 786c6adb3a94 ("i40iw: add puda code") Signed-off-by: Shiraz Saleem <shiraz.saleem@intel.com> Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
* | i40iw: Selectively teardown QPs on IP addr change eventShiraz Saleem2017-12-223-10/+24
On an IP address change event, all connected QPs are torn down irrespective of whether the IP address is involved in a connection. Only tear down connections whose source or destination address matches the netdev interface IP address being changed, and only if they are on the same VLAN as the netdev.
Fixes: e5e74b61b165 ("i40iw: Add IP addr handling on netdev events")
Signed-off-by: Shiraz Saleem <shiraz.saleem@intel.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
* | i40iw: Add notifier for network device eventsShiraz Saleem2017-12-223-3/+56
| | | | | | | | | | | | | | | | Register a netdevice notifier for netdev UP/DOWN notification events and report the appropriate ib event. Signed-off-by: Shiraz Saleem <shiraz.saleem@intel.com> Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
* | i40iw: Correct Q1/XF object count equationShiraz Saleem2017-12-221-2/+4
| | | | | | | | | | | | | | | | | | | | Lower Inbound RDMA Read Queue (Q1) object count by a factor of 2 as it is incorrectly doubled. Also, round up Q1 and Transmit FIFO (XF) object count to power of 2 to satisfy hardware requirement. Fixes: 86dbcd0f12e9 ("i40iw: add file to handle cqp calls") Signed-off-by: Shiraz Saleem <shiraz.saleem@intel.com> Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
* | i40iw: Use utility function roundup_pow_of_two()Shiraz Saleem2017-12-223-33/+7
| | | | | | | | | | | | | | | | Consolidate all power of 2 round calculations to use kernel utility function roundup_pow_of_two(). Signed-off-by: Shiraz Saleem <shiraz.saleem@intel.com> Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
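roundup_pow_of_two() from <linux/log2.h> replaces the driver's open-coded rounding loops. A small sketch of the consolidated calculation (hypothetical helper name):

    #include <linux/log2.h>
    #include <linux/types.h>

    /* Round an object count up to the next power of two, as the hardware
     * requires. roundup_pow_of_two(0) is undefined, so guard the zero case.
     */
    static u32 hw_obj_cnt(u32 requested)
    {
            return requested ? roundup_pow_of_two(requested) : 1;
    }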
* | i40iw: Set MAX_IRD_SIZE to 64Shiraz Saleem2017-12-221-1/+1
| | | | | | | | | | | | | | Increase I40IW_MAX_IRD_SIZE to 64 which is the device limit. Signed-off-by: Shiraz Saleem <shiraz.saleem@intel.com> Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
* | rdma: Update maintainer contact for Intel RDMA driversDennis Dalessandro2017-12-221-1/+3
| | | | | | | | | | | | | | | | | | Ensure both Mike and I are listed as maintainer contacts for Intel's qib, hfi1, and rdmavt drivers. Reviewed-by: Mike Marciniszyn <mike.marciniszyn@intel.com> Signed-off-by: Dennis Dalessandro <dennis.dalessandro@intel.com> Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
* | IB/SA: Check dlid before SA agent queries for ClassPortInfoVenkata Sandeep Dhanalakota2017-12-222-0/+26
SA queries the SM for class port info when there is a LID_CHANGE event. When a base lid is configured before the FM is started, i.e. when the smlid is not yet assigned, SA handles the LID_CHANGE event and tries to query the SM with lid 0. This causes a hang.

[ 1106.958820] INFO: task kworker/2:0:23 blocked for more than 120 seconds.
[ 1106.965082] Tainted: G O 4.12.0+ #1
[ 1106.969602] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[ 1106.977227] kworker/2:0 D 0 23 2 0x00000000
[ 1106.977250] Workqueue: infiniband update_ib_cpi [ib_core]
[ 1106.977261] Call Trace:
[ 1106.977273]  __schedule+0x28e/0x860
[ 1106.977285]  schedule+0x36/0x80
[ 1106.977298]  schedule_timeout+0x1a3/0x2e0
[ 1106.977310]  ? radix_tree_iter_tag_clear+0x1b/0x20
[ 1106.977322]  ? idr_alloc+0x64/0x90
[ 1106.977334]  wait_for_completion+0xe3/0x140
[ 1106.977347]  ? wake_up_q+0x80/0x80
[ 1106.977369]  update_ib_cpi+0x163/0x210 [ib_core]
[ 1106.977381]  process_one_work+0x147/0x370
[ 1106.977394]  worker_thread+0x4a/0x390
[ 1106.977406]  kthread+0x109/0x140
[ 1106.977418]  ? process_one_work+0x370/0x370
[ 1106.977430]  ? kthread_park+0x60/0x60
[ 1106.977443]  ret_from_fork+0x22/0x30

Always ensure a proper smlid is assigned before querying the SM for cpi.
Fixes: ee1c60b1bff ("IB/SA: Modify SA to implicitly cache Class Port info")
Reviewed-by: Ira Weiny <ira.weiny@intel.com>
Signed-off-by: Venkata Sandeep Dhanalakota <venkata.s.dhanalakota@intel.com>
Signed-off-by: Dennis Dalessandro <dennis.dalessandro@intel.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
* | nes: Change accelerated flag to boolShiraz Saleem2017-12-222-2/+2
| | | | | | | | | | | | | | | | | | The accelerated flag only utilizes two values: 0 and 1. Modify accelerated flag in struct nes_cm_node to bool. Signed-off-by: Shiraz Saleem <shiraz.saleem@intel.com> Reviewed-by: Yuval Shaia <yuval.shaia@oracle.com> Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>