Commit message    Author    Date    Files    Lines
...
| | * | nvmet: convert from kmap to nvmet_copy_from_sgl    Logan Gunthorpe    2017-04-21    1    -7/+25
| | | |  This is safer as it doesn't rely on the data being stored in a single page in an sgl. It also aids our effort to start phasing out users of sg_page. See [1]. For this we kmalloc some memory, copy to it and free at the end. Note: we can't allocate this memory on the stack as the kbuild test robot reports some frame size overflows on i386.
| | | |  [1] https://lwn.net/Articles/720053/
| | | |  Signed-off-by: Logan Gunthorpe <logang@deltatee.com>
| | | |  Reviewed-by: Christoph Hellwig <hch@lst.de>
| | | |  Reviewed-by: Max Gurtovoy <maxg@mellanox.com>
| | | |  Signed-off-by: Sagi Grimberg <sagi@grimberg.me>
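The pattern the commit describes boils down to a small bounce buffer. A kernel-style sketch (not a standalone program; the handler name, payload length and error handling are illustrative, only nvmet_copy_from_sgl() and the NVMe status codes are existing kernel names):

```c
/* Sketch of the kmalloc/copy/kfree pattern described above. */
static u16 example_parse_payload(struct nvmet_req *req, size_t len)
{
	void *buf;
	u16 status;

	/* The payload can't live on the stack (i386 frame-size warnings),
	 * so use a temporary heap buffer instead. */
	buf = kmalloc(len, GFP_KERNEL);
	if (!buf)
		return NVME_SC_INTERNAL;

	/* Copies correctly no matter how many pages the SGL spans, unlike
	 * kmap()ing a single sg_page(). */
	status = nvmet_copy_from_sgl(req, 0, buf, len);
	if (!status) {
		/* ... act on buf ... */
	}

	kfree(buf);
	return status;
}
```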
| | * | nvme: improve performance for virtual NVMe devices    Helen Koike    2017-04-21    2    -2/+154
| | | |  This change provides a mechanism to reduce the number of MMIO doorbell writes for the NVMe driver. When running in a virtualized environment like QEMU, the cost of an MMIO is quite hefty here. The main idea for the patch is to provide the device two memory locations: 1) to store the doorbell values so they can be looked up without the doorbell MMIO write, 2) to store an event index.
| | | |  I believe the doorbell value is obvious, the event index not so much. Similar to the virtio specification, the virtual device can tell the driver (guest OS) not to write MMIO unless you are writing past this value. FYI: doorbell values are written by the nvme driver (guest OS) and the event index is written by the virtual device (host OS).
| | | |  The patch implements a new admin command that will communicate where these two memory locations reside. If the command fails, the nvme driver will work as before without any optimizations.
| | | |  Contributions: Eric Northup <digitaleric@google.com>, Frank Swiderski <fes@google.com>, Ted Tso <tytso@mit.edu>, Keith Busch <keith.busch@intel.com>
| | | |  Just to give an idea of the performance boost with the vendor extension: running fio [1] with a stock NVMe driver I get about 200K read IOPs; with my vendor patch I get about 1000K read IOPs. This was running with a null device, i.e. the backing device simply returned success on every read IO request.
| | | |  [1] Running on a 4 core machine: fio --time_based --name=benchmark --runtime=30 --filename=/dev/nvme0n1 --nrfiles=1 --ioengine=libaio --iodepth=32 --direct=1 --invalidate=1 --verify=0 --verify_fatal=0 --numjobs=4 --rw=randread --blocksize=4k --randrepeat=false
| | | |  Signed-off-by: Rob Nelson <rlnelson@google.com>
| | | |  [mlin: port for upstream]
| | | |  Signed-off-by: Ming Lin <mlin@kernel.org>
| | | |  [koike: updated for upstream]
| | | |  Signed-off-by: Helen Koike <helen.koike@collabora.co.uk>
| | | |  Reviewed-by: Christoph Hellwig <hch@lst.de>
| | | |  Signed-off-by: Keith Busch <keith.busch@intel.com>
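The event-index rule is the same wrap-safe comparison the virtio specification uses: only perform the MMIO doorbell write when the new doorbell value moves past the device-published event index. A small user-space sketch of that check (the function name and example values are illustrative, not the driver's actual helper):

```c
#include <stdint.h>
#include <stdio.h>

/* Wrap-safe check: has the doorbell value moved past the event index
 * since we last rang the doorbell?  Same idea as virtio's event index. */
static int need_mmio_doorbell(uint16_t event_idx, uint16_t new_idx, uint16_t old_idx)
{
	return (uint16_t)(new_idx - event_idx - 1) < (uint16_t)(new_idx - old_idx);
}

int main(void)
{
	/* Device asked to be notified once the tail passes 100; moving
	 * 98 -> 99 needs no MMIO, moving 99 -> 101 does. */
	printf("%d\n", need_mmio_doorbell(100, 99, 98));   /* 0: skip the MMIO write */
	printf("%d\n", need_mmio_doorbell(100, 101, 99));  /* 1: ring the doorbell */
	return 0;
}
```

The unsigned 16-bit subtraction keeps the comparison correct when the queue index wraps around.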
| | * | nvme/pci: Don't set reserved SQ create flags    Keith Busch    2017-04-21    1    -1/+1
| | | |  The QPRIO field is only valid if weighted round robin arbitration is used, and this driver doesn't enable that controller configuration option.
| | | |  Signed-off-by: Keith Busch <keith.busch@intel.com>
| | | |  Reviewed-by: Christoph Hellwig <hch@lst.de>
| | | |  Reviewed-by: Sagi Grimberg <sagi@grimberg.me>
| * | | mtip32xx: fix dereference of stack garbage    Jens Axboe    2017-04-21    1    -0/+1
| |/ /   We need to get the command payload from the request before we attempt to dereference it.
| | |    Fixes: 4dda4735c581 ("mtip32xx: add a status field to struct mtip_cmd")
| | |    Reviewed-by: Christoph Hellwig <hch@lst.de>
| | |    Signed-off-by: Jens Axboe <axboe@fb.com>
| * | blk-stat: kill blk_stat_rq_ddir()    Jens Axboe    2017-04-21    4    -19/+7
| | |  No point in providing and exporting this helper. There's just one (real) user of it, just use rq_data_dir().
| | |  Reviewed-by: Christoph Hellwig <hch@lst.de>
| | |  Signed-off-by: Jens Axboe <axboe@fb.com>
| * | nbd: set the max segments to USHRT_MAX    Josef Bacik    2017-04-21    1    -0/+1
| | |  I lack the basic understanding of what segments mean, so we were being limited to 512KiB requests even with higher max_sectors sizes set. Setting the maximum number of segments to unlimited allows us to actually have arbitrarily large IOs go through NBD.
| | |  Signed-off-by: Josef Bacik <jbacik@fb.com>
| | |  Signed-off-by: Jens Axboe <axboe@fb.com>
| * | blk-mq: Remove blk_mq_sched_move_to_dispatch()    Bart Van Assche    2017-04-21    2    -19/+0
| | |  commit c13660a08c8b ("blk-mq-sched: change ->dispatch_requests() to ->dispatch_request()") removed the last user of this function. Hence also remove the function itself.
| | |  Signed-off-by: Bart Van Assche <bart.vanassche@sandisk.com>
| | |  Cc: Omar Sandoval <osandov@fb.com>
| | |  Cc: Hannes Reinecke <hare@suse.com>
| | |  Signed-off-by: Jens Axboe <axboe@fb.com>
| * | blk-mq: add might_sleep check to blk_mq_get_driver_tag()    Jens Axboe    2017-04-21    1    -0/+2
| | |  If the caller passes in wait=true, it has to be able to block for a driver tag. We just had a bug where flush insertion would block on tag allocation, while we had preempt disabled. Ensure that we catch cases like that earlier next time.
| | |  Reviewed-by: Bart Van Assche <Bart.VanAssche@sandisk.com>
| | |  Signed-off-by: Jens Axboe <axboe@fb.com>
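A minimal sketch of the kind of assertion being added, using the real might_sleep_if() debug macro; the function name and body here are illustrative rather than the actual blk-mq code:

```c
/* Sketch: if the caller passed wait=true it claims it may block for a
 * driver tag, so assert that sleeping is legal right at the entry point
 * instead of tripping over it later inside tag allocation. */
static bool get_driver_tag_example(bool wait)
{
	might_sleep_if(wait);	/* warns if called in atomic context with wait=true */

	/* ... driver tag allocation, which may sleep when wait == true ... */
	return true;
}
```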
| * | blk-mq: Fix poll_stat for new size-based bucketing.    Stephen Bates    2017-04-21    3    -9/+13
| | |  Fixes an issue where the size of the poll_stat array in request_queue does not match the size expected by the new size-based bucketing for IO completion polling.
| | |  Fixes: 720b8ccc4500 ("blk-mq: Add a polling specific stats function")
| | |  Signed-off-by: Stephen Bates <sbates@raithlin.com>
| | |  Signed-off-by: Jens Axboe <axboe@fb.com>
| * | blk-mq: fix schedule-while-atomic with scheduler attached    Jens Axboe    2017-04-21    1    -5/+6
| | |  We must have dropped the ctx before we call blk_mq_sched_insert_request() with can_block=true, otherwise we risk that a flush request can block on insertion if we are currently out of tags.
| | |  [ 47.667190] BUG: scheduling while atomic: jbd2/sda2-8/2089/0x00000002
| | |  [ 47.674493] Modules linked in: x86_pkg_temp_thermal btrfs xor zlib_deflate raid6_pq sr_mod cdre
| | |  [ 47.690572] Preemption disabled at:
| | |  [ 47.690584] [<ffffffff81326c7c>] blk_mq_sched_get_request+0x6c/0x280
| | |  [ 47.701764] CPU: 1 PID: 2089 Comm: jbd2/sda2-8 Not tainted 4.11.0-rc7+ #271
| | |  [ 47.709630] Hardware name: Dell Inc. PowerEdge T630/0NT78X, BIOS 2.3.4 11/09/2016
| | |  [ 47.718081] Call Trace:
| | |  [ 47.720903]  dump_stack+0x4f/0x73
| | |  [ 47.724694]  ? blk_mq_sched_get_request+0x6c/0x280
| | |  [ 47.730137]  __schedule_bug+0x6c/0xc0
| | |  [ 47.734314]  __schedule+0x559/0x780
| | |  [ 47.738302]  schedule+0x3b/0x90
| | |  [ 47.741899]  io_schedule+0x11/0x40
| | |  [ 47.745788]  blk_mq_get_tag+0x167/0x2a0
| | |  [ 47.750162]  ? remove_wait_queue+0x70/0x70
| | |  [ 47.754901]  blk_mq_get_driver_tag+0x92/0xf0
| | |  [ 47.759758]  blk_mq_sched_insert_request+0x134/0x170
| | |  [ 47.765398]  ? blk_account_io_start+0xd0/0x270
| | |  [ 47.770679]  blk_mq_make_request+0x1b2/0x850
| | |  [ 47.775766]  generic_make_request+0xf7/0x2d0
| | |  [ 47.780860]  submit_bio+0x5f/0x120
| | |  [ 47.784979]  ? submit_bio+0x5f/0x120
| | |  [ 47.789631]  submit_bh_wbc.isra.46+0x10d/0x130
| | |  [ 47.794902]  submit_bh+0xb/0x10
| | |  [ 47.798719]  journal_submit_commit_record+0x190/0x210
| | |  [ 47.804686]  ? _raw_spin_unlock+0x13/0x30
| | |  [ 47.809480]  jbd2_journal_commit_transaction+0x180a/0x1d00
| | |  [ 47.815925]  kjournald2+0xb6/0x250
| | |  [ 47.820022]  ? kjournald2+0xb6/0x250
| | |  [ 47.824328]  ? remove_wait_queue+0x70/0x70
| | |  [ 47.829223]  kthread+0x10e/0x140
| | |  [ 47.833147]  ? commit_timeout+0x10/0x10
| | |  [ 47.837742]  ? kthread_create_on_node+0x40/0x40
| | |  [ 47.843122]  ret_from_fork+0x29/0x40
| | |  Fixes: a4d907b6a33b ("blk-mq: streamline blk_mq_make_request")
| | |  Reviewed-by: Omar Sandoval <osandov@fb.com>
| | |  Signed-off-by: Jens Axboe <axboe@fb.com>
| * | blk-mq: Add a polling specific stats function    Stephen Bates    2017-04-20    1    -10/+35
| | |  Rather than bucketing IO statistics based on direction only, we also bucket based on the IO size. This leads to improved polling performance. Update the bucket callback function and use it in the polling latency estimation.
| | |  Signed-off-by: Stephen Bates <sbates@raithlin.com>
| | |  Signed-off-by: Jens Axboe <axboe@fb.com>
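A user-space sketch of direction-plus-size bucketing of the kind described; the bucket layout, names and size classes are illustrative, not the driver's exact table:

```c
#include <stdio.h>

#define NBUCKETS 16	/* 2 directions x 8 size classes, illustrative */

/* Map (direction, bytes) to a bucket: even buckets for reads, odd for
 * writes, grouped by power-of-two request size starting at 512 bytes. */
static int poll_stat_bucket(int is_write, unsigned int bytes)
{
	int size_class = 0;

	while (bytes > (512u << size_class) && size_class < NBUCKETS / 2 - 1)
		size_class++;

	return 2 * size_class + is_write;
}

int main(void)
{
	printf("4KiB read  -> bucket %d\n", poll_stat_bucket(0, 4096));
	printf("4KiB write -> bucket %d\n", poll_stat_bucket(1, 4096));
	printf("64KiB read -> bucket %d\n", poll_stat_bucket(0, 65536));
	return 0;
}
```

The related blk-stat change below lets such a callback return a negative index to mean the request should not be accounted at all.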
| * | blk-stat: convert blk-stat bucket callback to signed    Stephen Bates    2017-04-20    3    -7/+10
| | |  In order to allow for filtering of IO based on some other properties of the request than direction, we allow the bucket function to return an int. If the bucket callback returns a negative value, do not count it in the stats accumulation.
| | |  Signed-off-by: Stephen Bates <sbates@raithlin.com>
| | |  Fixed up Kyber scheduler stat callback.
| | |  Signed-off-by: Jens Axboe <axboe@fb.com>
| * | block: remove the errors field from struct request    Christoph Hellwig    2017-04-20    7    -49/+24
| | |  Signed-off-by: Christoph Hellwig <hch@lst.de>
| | |  Reviewed-by: Bart Van Assche <Bart.VanAssche@sandisk.com>
| | |  Acked-by: Roger Pau Monné <roger.pau@citrix.com>
| | |  Reviewed-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
| | |  Signed-off-by: Jens Axboe <axboe@fb.com>
| * | blktrace: remove the unused block_rq_abort tracepoint    Christoph Hellwig    2017-04-20    2    -43/+10
| | |  Signed-off-by: Christoph Hellwig <hch@lst.de>
| | |  Reviewed-by: Johannes Thumshirn <jthumshirn@suse.de>
| | |  Signed-off-by: Jens Axboe <axboe@fb.com>
| * | swim3: remove (commented out) printing of req->errors    Christoph Hellwig    2017-04-20    1    -2/+2
| | |  Signed-off-by: Christoph Hellwig <hch@lst.de>
| | |  Reviewed-by: Johannes Thumshirn <jthumshirn@suse.de>
| | |  Signed-off-by: Jens Axboe <axboe@fb.com>
| * | ataflop: switch from req->errors to req->error_count    Christoph Hellwig    2017-04-20    1    -5/+7
| | |  Signed-off-by: Christoph Hellwig <hch@lst.de>
| | |  Signed-off-by: Jens Axboe <axboe@fb.com>
| * | floppy: switch from req->errors to req->error_count    Christoph Hellwig    2017-04-20    1    -2/+4
| | |  Signed-off-by: Christoph Hellwig <hch@lst.de>
| | |  Signed-off-by: Jens Axboe <axboe@fb.com>
| * | block: add a error_count field to struct request    Christoph Hellwig    2017-04-20    1    -0/+1
| | |  This is for the legacy floppy and ataflop drivers that currently abuse ->errors for this purpose. It's stashed away in a union to not grow the struct size; the other fields are either used by modern drivers for different purposes or by the I/O scheduler before queueing the I/O to drivers.
| | |  Signed-off-by: Christoph Hellwig <hch@lst.de>
| | |  Signed-off-by: Jens Axboe <axboe@fb.com>
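A minimal sketch of the space-sharing idea described above; only error_count corresponds to the field this commit adds, the struct name and the other union member are illustrative stand-ins:

```c
/* Sketch: reuse existing space in the request structure rather than
 * growing it.  'sched_data' stands in for whatever the union shares
 * space with in the real struct. */
struct request_example {
	/* ... */
	union {
		void *sched_data;		/* used before dispatch to the driver */
		unsigned int error_count;	/* legacy floppy/ataflop retry counter */
	};
	/* ... */
};
```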
| * | blk-mq: simplify __blk_mq_complete_request    Christoph Hellwig    2017-04-20    1    -17/+8
| | |  Merge blk_mq_ipi_complete_request and blk_mq_stat_add into their only caller.
| | |  Signed-off-by: Christoph Hellwig <hch@lst.de>
| | |  Signed-off-by: Jens Axboe <axboe@fb.com>
| * | blk-mq: remove the error argument to blk_mq_complete_request    Christoph Hellwig    2017-04-20    12    -26/+17
| | |  Now that all drivers that call blk_mq_complete_request have a ->complete callback, we can remove the direct call to blk_mq_end_request, as well as the error argument to blk_mq_complete_request.
| | |  Signed-off-by: Christoph Hellwig <hch@lst.de>
| | |  Reviewed-by: Johannes Thumshirn <jthumshirn@suse.de>
| | |  Reviewed-by: Bart Van Assche <Bart.VanAssche@sandisk.com>
| | |  Signed-off-by: Jens Axboe <axboe@fb.com>
| * | xen-blkfront: don't use req->errors    Christoph Hellwig    2017-04-20    1    -11/+25
| | |  xen-blkfront is the last user of rq->errors for passing errors back to blk-mq, and I'd like to get rid of that. In the longer run the driver should be moving more of the completion processing into .complete, but this is the minimal change to move forward for now.
| | |  Signed-off-by: Christoph Hellwig <hch@lst.de>
| | |  Acked-by: Roger Pau Monné <roger.pau@citrix.com>
| | |  Reviewed-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
| | |  Signed-off-by: Jens Axboe <axboe@fb.com>
| * | mtip32xx: add a status field to struct mtip_cmd    Christoph Hellwig    2017-04-20    2    -7/+10
| | |  Instead of using req->errors, which will go away.
| | |  Signed-off-by: Christoph Hellwig <hch@lst.de>
| | |  Reviewed-by: Johannes Thumshirn <jthumshirn@suse.de>
| | |  Signed-off-by: Jens Axboe <axboe@fb.com>
| * | nbd: don't use req->errors    Christoph Hellwig    2017-04-20    1    -15/+15
| | |  Add an nbd-specific field instead.
| | |  Signed-off-by: Christoph Hellwig <hch@lst.de>
| | |  Reviewed-by: Josef Bacik <jbacik@fb.com>
| | |  Signed-off-by: Jens Axboe <axboe@fb.com>
| * | dm mpath: don't check for req->errors    Christoph Hellwig    2017-04-20    1    -1/+1
| | |  We'll get all proper errors reported through ->end_io, and ->errors will go away soon.
| | |  Signed-off-by: Christoph Hellwig <hch@lst.de>
| | |  Reviewed-by: Johannes Thumshirn <jthumshirn@suse.de>
| | |  Signed-off-by: Jens Axboe <axboe@fb.com>
| * | dm rq: don't pass irrelevant error code to blk_mq_complete_request    Christoph Hellwig    2017-04-20    1    -1/+1
| | |  dm never uses rq->errors, so there is no need to pass an error argument to blk_mq_complete_request.
| | |  Signed-off-by: Christoph Hellwig <hch@lst.de>
| | |  Reviewed-by: Johannes Thumshirn <jthumshirn@suse.de>
| | |  Reviewed-by: Bart Van Assche <Bart.VanAssche@sandisk.com>
| | |  Signed-off-by: Jens Axboe <axboe@fb.com>
| * | null_blk: don't pass always-0 req->errors to blk_mq_complete_request    Christoph Hellwig    2017-04-20    1    -1/+1
| | |  Signed-off-by: Christoph Hellwig <hch@lst.de>
| | |  Reviewed-by: Johannes Thumshirn <jthumshirn@suse.de>
| | |  Signed-off-by: Jens Axboe <axboe@fb.com>
| * | loop: zero-fill bio on the submitting cpu    Christoph Hellwig    2017-04-20    2    -16/+15
| | |  In truth I've just audited which blk-mq drivers don't currently have a complete callback, but I think this change is at least borderline useful.
| | |  Signed-off-by: Christoph Hellwig <hch@lst.de>
| | |  Reviewed-by: Ming Lei <ming.lei@redhat.com>
| | |  Signed-off-by: Jens Axboe <axboe@fb.com>
| * | scsi: introduce a result field in struct scsi_request    Christoph Hellwig    2017-04-20    31    -123/+123
| | |  This passes on the scsi_cmnd result field to users of passthrough requests. Currently we abuse req->errors for this purpose, but that field will go away in its current form.
| | |  Note that the old IDE code abuses the errors field in very creative ways and stores all kinds of different values in it. I didn't dare to touch this magic, so the abuses are brought forward 1:1.
| | |  Signed-off-by: Christoph Hellwig <hch@lst.de>
| | |  Reviewed-by: Martin K. Petersen <martin.petersen@oracle.com>
| | |  Reviewed-by: Bart Van Assche <Bart.VanAssche@sandisk.com>
| | |  Signed-off-by: Jens Axboe <axboe@fb.com>
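A kernel-style sketch of what a passthrough user reads after this change; scsi_req() is the existing accessor for the passthrough data attached to a request, while the completion handler itself is illustrative:

```c
/* Sketch: read the SCSI result from scsi_request instead of the
 * soon-to-be-removed req->errors. */
static void example_passthrough_done(struct request *rq)
{
	struct scsi_request *sreq = scsi_req(rq);

	if (sreq->result)
		pr_warn("passthrough cmd failed, result 0x%x\n", sreq->result);
}
```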
| * | virtio_blk: don't use req->errors    Christoph Hellwig    2017-04-20    1    -7/+3
| | |  Remove passing req->errors (which at that point is always 0) to blk_mq_complete_request, and rely on the virtio status code for the serial number passthrough request.
| | |  Signed-off-by: Christoph Hellwig <hch@lst.de>
| | |  Reviewed-by: Johannes Thumshirn <jthumshirn@suse.de>
| | |  Signed-off-by: Jens Axboe <axboe@fb.com>
| * | virtio: fix spelling of virtblk_scsi_request_done    Christoph Hellwig    2017-04-20    1    -3/+3
| | |  Signed-off-by: Christoph Hellwig <hch@lst.de>
| | |  Reviewed-by: Johannes Thumshirn <jthumshirn@suse.de>
| | |  Reviewed-by: Bart Van Assche <Bart.VanAssche@sandisk.com>
| | |  Signed-off-by: Jens Axboe <axboe@fb.com>
| * | nvme: make nvme_error_status private    Christoph Hellwig    2017-04-20    3    -6/+4
| | |  Currently it's used by the lightnvm passthrough ioctl, but we'd like to make it private in preparation for the block layer specific error codes. LightNVM already returns the real NVMe status anyway, so I think we can just limit it to returning -EIO for any status set. This will need a careful audit from the lightnvm folks, though.
| | |  Signed-off-by: Christoph Hellwig <hch@lst.de>
| | |  Signed-off-by: Jens Axboe <axboe@fb.com>
| * | nvme: split nvme status from block req->errors    Christoph Hellwig    2017-04-20    7    -60/+65
| | |  We want our own clearly defined error field for NVMe passthrough commands, and the request errors field is going away in its current form. Just store the status and result field in the nvme_request field from hardirq completion context (using a new helper) and then generate a Linux errno for the block layer only when we actually need it.
| | |  Because we can't overload the status value with a negative error code for cancelled commands, we now have a flags field in struct nvme_request that contains a bit for this condition.
| | |  Signed-off-by: Christoph Hellwig <hch@lst.de>
| | |  Reviewed-by: Johannes Thumshirn <jthumshirn@suse.de>
| | |  Signed-off-by: Jens Axboe <axboe@fb.com>
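A kernel-style sketch of the split described above; union nvme_result is a real NVMe header type, but the struct, flag and helper names here are illustrative rather than the driver's exact ones:

```c
/* Sketch: keep the raw NVMe status and result from hard-IRQ completion
 * context, and translate to a Linux errno only when someone asks for one.
 * A flag bit, not a negative status, marks cancelled commands. */

#define EXAMPLE_REQ_CANCELLED	(1 << 0)

struct nvme_request_example {
	u16			status;		/* raw NVMe completion status */
	union nvme_result	result;
	u16			flags;
};

static int example_status_to_errno(const struct nvme_request_example *nreq)
{
	if (nreq->flags & EXAMPLE_REQ_CANCELLED)
		return -EINTR;
	return nreq->status ? -EIO : 0;
}
```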
| * | nvme-fc: fix status code handling in nvme_fc_fcpio_done    Christoph Hellwig    2017-04-20    1    -8/+8
| | |  nvme_complete_async_event expects the little endian status code including the phase bit, and a new completion handler I plan to introduce will do so as well. Change the status variable into the little endian format with the phase bit used in the NVMe CQE to fix / enable this.
| | |  Signed-off-by: Christoph Hellwig <hch@lst.de>
| | |  Reviewed-by: Johannes Thumshirn <jthumshirn@suse.de>
| | |  Signed-off-by: Jens Axboe <axboe@fb.com>
| * | block: remove the blk_execute_rq return value    Christoph Hellwig    2017-04-20    16    -28/+34
| | |  The function only returns -EIO if rq->errors is non-zero, which is not very useful and lets a large number of callers ignore the return value. Just let the callers figure out their error themselves.
| | |  Signed-off-by: Christoph Hellwig <hch@lst.de>
| | |  Reviewed-by: Johannes Thumshirn <jthumshirn@suse.de>
| | |  Reviewed-by: Bart Van Assche <Bart.VanAssche@sandisk.com>
| | |  Signed-off-by: Jens Axboe <axboe@fb.com>
| * | pd: don't check blk_execute_rq return value.    Christoph Hellwig    2017-04-20    1    -5/+2
| | |  The driver never sets req->errors, so blk_execute_rq will always return 0.
| | |  Signed-off-by: Christoph Hellwig <hch@lst.de>
| | |  Reviewed-by: Bart Van Assche <Bart.VanAssche@sandisk.com>
| | |  Reviewed-by: Johannes Thumshirn <jthumshirn@suse.de>
| | |  Signed-off-by: Jens Axboe <axboe@fb.com>
| * | bdi: Drop 'parent' argument from bdi_register[_va]()    Jan Kara    2017-04-20    4    -15/+11
| | |  Drop 'parent' argument of bdi_register() and bdi_register_va(). It is always NULL.
| | |  Reviewed-by: Christoph Hellwig <hch@lst.de>
| | |  Signed-off-by: Jan Kara <jack@suse.cz>
| | |  Signed-off-by: Jens Axboe <axboe@fb.com>
| * | block: Remove unused functions    Jan Kara    2017-04-20    2    -55/+6
| | |  Now that all backing_dev_info structures are allocated separately, we can drop some unused functions.
| | |  Reviewed-by: Christoph Hellwig <hch@lst.de>
| | |  Signed-off-by: Jan Kara <jack@suse.cz>
| | |  Signed-off-by: Jens Axboe <axboe@fb.com>
| * | fs: Remove SB_I_DYNBDI flag    Jan Kara    2017-04-20    6    -11/+1
| | |  Now that all bdi structures filesystems use are properly refcounted, we can remove the SB_I_DYNBDI flag.
| | |  Reviewed-by: Christoph Hellwig <hch@lst.de>
| | |  Signed-off-by: Jan Kara <jack@suse.cz>
| | |  Signed-off-by: Jens Axboe <axboe@fb.com>
| * | ubifs: Convert to separately allocated bdi    Jan Kara    2017-04-20    2    -19/+9
| | |  Allocate struct backing_dev_info separately instead of embedding it inside the superblock. This unifies handling of bdi among users.
| | |  CC: Richard Weinberger <richard@nod.at>
| | |  CC: Artem Bityutskiy <dedekind1@gmail.com>
| | |  CC: Adrian Hunter <adrian.hunter@intel.com>
| | |  CC: linux-mtd@lists.infradead.org
| | |  Acked-by: Richard Weinberger <richard@nod.at>
| | |  Reviewed-by: Christoph Hellwig <hch@lst.de>
| | |  Signed-off-by: Jan Kara <jack@suse.cz>
| | |  Signed-off-by: Jens Axboe <axboe@fb.com>
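The filesystem conversions in this series share one shape: instead of a backing_dev_info embedded in the filesystem's own superblock info, the superblock gets a separately allocated, refcounted bdi that is dropped automatically on superblock destruction. A hedged sketch of that shape, assuming the super_setup_bdi_name() helper from the same patch series; the filesystem name, sequence counter and fill_super context are illustrative:

```c
/* Sketch only: the general conversion pattern, not any one filesystem's code. */
static atomic_t examplefs_bdi_seq;	/* illustrative per-mount counter */

static int examplefs_fill_super(struct super_block *sb, void *data, int silent)
{
	int err;

	/* Let the VFS allocate and register a refcounted bdi for this sb;
	 * it is released automatically when the superblock goes away. */
	err = super_setup_bdi_name(sb, "examplefs-%d",
				   atomic_inc_return(&examplefs_bdi_seq));
	if (err)
		return err;

	/* Tune the new bdi the same way the old embedded one was tuned. */
	sb->s_bdi->ra_pages = 0;

	/* ... rest of mount setup ... */
	return 0;
}
```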
| * | nfs: Convert to separately allocated bdi    Jan Kara    2017-04-20    5    -36/+28
| | |  Allocate struct backing_dev_info separately instead of embedding it inside the superblock. This unifies handling of bdi among users.
| | |  CC: Anna Schumaker <anna.schumaker@netapp.com>
| | |  CC: linux-nfs@vger.kernel.org
| | |  Reviewed-by: Christoph Hellwig <hch@lst.de>
| | |  Signed-off-by: Jan Kara <jack@suse.cz>
| | |  Acked-by: Trond Myklebust <trond.myklebust@primarydata.com>
| | |  Signed-off-by: Jens Axboe <axboe@fb.com>
| * | ncpfs: Convert to separately allocated bdi    Jan Kara    2017-04-20    2    -7/+2
| | |  Allocate struct backing_dev_info separately instead of embedding it inside the superblock. This unifies handling of bdi among users.
| | |  CC: Petr Vandrovec <petr@vandrovec.name>
| | |  Acked-by: Petr Vandrovec <petr@vandrovec.name>
| | |  Reviewed-by: Christoph Hellwig <hch@lst.de>
| | |  Signed-off-by: Jan Kara <jack@suse.cz>
| | |  Signed-off-by: Jens Axboe <axboe@fb.com>
| * | nilfs2: Convert to properly refcounting bdi    Jan Kara    2017-04-20    1    -1/+2
| | |  Similarly to set_bdev_super() NILFS2 just used block device reference to bdi. Convert it to properly getting bdi reference. The reference will get automatically dropped on superblock destruction.
| | |  CC: linux-nilfs@vger.kernel.org
| | |  Reviewed-by: Christoph Hellwig <hch@lst.de>
| | |  Signed-off-by: Jan Kara <jack@suse.cz>
| | |  Acked-by: Ryusuke Konishi <konishi.ryusuke@lab.ntt.co.jp>
| | |  Signed-off-by: Jens Axboe <axboe@fb.com>
| * | gfs2: Convert to properly refcounting bdi    Jan Kara    2017-04-20    1    -6/+3
| | |  Similarly to set_bdev_super() GFS2 just used block device reference to bdi. Convert it to properly getting bdi reference. The reference will get automatically dropped on superblock destruction.
| | |  CC: Steven Whitehouse <swhiteho@redhat.com>
| | |  CC: Bob Peterson <rpeterso@redhat.com>
| | |  CC: cluster-devel@redhat.com
| | |  Reviewed-by: Christoph Hellwig <hch@lst.de>
| | |  Signed-off-by: Jan Kara <jack@suse.cz>
| | |  Signed-off-by: Jens Axboe <axboe@fb.com>
| * | fuse: Get rid of bdi_initialized    Jan Kara    2017-04-20    3    -8/+2
| | |  It is not needed anymore since bdi is initialized whenever superblock exists.
| | |  CC: Miklos Szeredi <miklos@szeredi.hu>
| | |  CC: linux-fsdevel@vger.kernel.org
| | |  Suggested-by: Miklos Szeredi <mszeredi@redhat.com>
| | |  Reviewed-by: Christoph Hellwig <hch@lst.de>
| | |  Signed-off-by: Jan Kara <jack@suse.cz>
| | |  Signed-off-by: Jens Axboe <axboe@fb.com>
| * | fuse: Convert to separately allocated bdi    Jan Kara    2017-04-20    3    -36/+17
| | |  Allocate struct backing_dev_info separately instead of embedding it inside the superblock. This unifies handling of bdi among users.
| | |  CC: Miklos Szeredi <miklos@szeredi.hu>
| | |  CC: linux-fsdevel@vger.kernel.org
| | |  Acked-by: Miklos Szeredi <mszeredi@redhat.com>
| | |  Reviewed-by: Christoph Hellwig <hch@lst.de>
| | |  Signed-off-by: Jan Kara <jack@suse.cz>
| | |  Signed-off-by: Jens Axboe <axboe@fb.com>
| * | exofs: Convert to separately allocated bdi    Jan Kara    2017-04-20    2    -12/+6
| | |  Allocate struct backing_dev_info separately instead of embedding it inside the superblock. This unifies handling of bdi among users.
| | |  CC: Boaz Harrosh <ooo@electrozaur.com>
| | |  CC: Benny Halevy <bhalevy@primarydata.com>
| | |  Acked-by: Boaz Harrosh <ooo@electrozaur.com>
| | |  Reviewed-by: Christoph Hellwig <hch@lst.de>
| | |  Signed-off-by: Jan Kara <jack@suse.cz>
| | |  Signed-off-by: Jens Axboe <axboe@fb.com>
| * | coda: Convert to separately allocated bdi    Jan Kara    2017-04-20    2    -8/+4
| | |  Allocate struct backing_dev_info separately instead of embedding it inside the superblock. This unifies handling of bdi among users.
| | |  CC: Jan Harkes <jaharkes@cs.cmu.edu>
| | |  CC: coda@cs.cmu.edu
| | |  CC: codalist@coda.cs.cmu.edu
| | |  Reviewed-by: Christoph Hellwig <hch@lst.de>
| | |  Signed-off-by: Jan Kara <jack@suse.cz>
| | |  Signed-off-by: Jens Axboe <axboe@fb.com>
| * | mtd: Convert to dynamically allocated bdi infrastructure    Jan Kara    2017-04-20    3    -17/+18
| | |  MTD already allocates backing_dev_info dynamically. Convert it to use generic infrastructure for this including proper refcounting. We drop mtd->backing_dev_info as its only use was to pass mtd_bdi pointer from one file into another and if we wanted to keep that in a clean way, we'd have to make mtd hold and drop bdi reference as needed which seems pointless for passing one global pointer...
| | |  CC: David Woodhouse <dwmw2@infradead.org>
| | |  CC: Brian Norris <computersforpeace@gmail.com>
| | |  CC: linux-mtd@lists.infradead.org
| | |  Reviewed-by: Christoph Hellwig <hch@lst.de>
| | |  Signed-off-by: Jan Kara <jack@suse.cz>
| | |  Signed-off-by: Jens Axboe <axboe@fb.com>
| * | afs: Convert to separately allocated bdi    Jan Kara    2017-04-20    3    -10/+4
| | |  Allocate struct backing_dev_info separately instead of embedding it inside the superblock. This unifies handling of bdi among users.
| | |  CC: David Howells <dhowells@redhat.com>
| | |  CC: linux-afs@lists.infradead.org
| | |  Reviewed-by: Christoph Hellwig <hch@lst.de>
| | |  Signed-off-by: Jan Kara <jack@suse.cz>
| | |  Signed-off-by: Jens Axboe <axboe@fb.com>
| * | ecryptfs: Convert to separately allocated bdi    Jan Kara    2017-04-20    2    -4/+1
| | |  Allocate struct backing_dev_info separately instead of embedding it inside the superblock. This unifies handling of bdi among users.
| | |  CC: Tyler Hicks <tyhicks@canonical.com>
| | |  CC: ecryptfs@vger.kernel.org
| | |  Acked-by: Tyler Hicks <tyhicks@canonical.com>
| | |  Reviewed-by: Christoph Hellwig <hch@lst.de>
| | |  Signed-off-by: Jan Kara <jack@suse.cz>
| | |  Signed-off-by: Jens Axboe <axboe@fb.com>