path: root/fs
Log entries: commit message (Author, Date, Files changed, Lines -deleted/+added)
* dcache: convert dentry_stat.nr_unused to per-cpu counters (Dave Chinner, 2013-09-11, 1 file, -3/+27)

Before we split up the dcache_lru_lock, the unused dentry counter needs to be made independent of the global dcache_lru_lock. Convert it to per-cpu counters to do this.

Signed-off-by: Dave Chinner <dchinner@redhat.com>
Signed-off-by: Glauber Costa <glommer@openvz.org>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Acked-by: Mel Gorman <mgorman@suse.de>
Cc: "Theodore Ts'o" <tytso@mit.edu>
Cc: Adrian Hunter <adrian.hunter@intel.com>
Cc: Al Viro <viro@zeniv.linux.org.uk>
Cc: Artem Bityutskiy <artem.bityutskiy@linux.intel.com>
Cc: Arve Hjønnevåg <arve@android.com>
Cc: Carlos Maiolino <cmaiolino@redhat.com>
Cc: Christoph Hellwig <hch@lst.de>
Cc: Chuck Lever <chuck.lever@oracle.com>
Cc: Daniel Vetter <daniel.vetter@ffwll.ch>
Cc: David Rientjes <rientjes@google.com>
Cc: Gleb Natapov <gleb@redhat.com>
Cc: Greg Thelen <gthelen@google.com>
Cc: J. Bruce Fields <bfields@redhat.com>
Cc: Jan Kara <jack@suse.cz>
Cc: Jerome Glisse <jglisse@redhat.com>
Cc: John Stultz <john.stultz@linaro.org>
Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Cc: Kent Overstreet <koverstreet@google.com>
Cc: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Cc: Marcelo Tosatti <mtosatti@redhat.com>
Cc: Mel Gorman <mgorman@suse.de>
Cc: Steven Whitehouse <swhiteho@redhat.com>
Cc: Thomas Hellstrom <thellstrom@vmware.com>
Cc: Trond Myklebust <Trond.Myklebust@netapp.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>

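The per-cpu counter pattern the commit describes looks roughly like the sketch below: updates touch only the local CPU's copy, and the global value is summed only when somebody actually reads it. This is a hedged illustration of the pattern, not the exact hunk from the patch; symbol names follow fs/dcache.c conventions.

    /* One counter per CPU; the fast path never touches a shared cacheline. */
    static DEFINE_PER_CPU(long, nr_dentry_unused);

    static void dentry_lru_add_count(void)
    {
            this_cpu_inc(nr_dentry_unused);
    }

    static void dentry_lru_del_count(void)
    {
            this_cpu_dec(nr_dentry_unused);
    }

    /* Slow path (e.g. /proc/sys/fs/dentry-state): sum across all CPUs.
     * Individual per-cpu values can be transiently negative, so clamp. */
    static long get_nr_dentry_unused(void)
    {
            int i;
            long sum = 0;

            for_each_possible_cpu(i)
                    sum += per_cpu(nr_dentry_unused, i);
            return sum < 0 ? 0 : sum;
    }
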
* super: fix calculation of shrinkable objects for small numbers (Glauber Costa, 2013-09-11, 7 files, -15/+14)

The sysctl knob sysctl_vfs_cache_pressure is used to determine which percentage of the shrinkable objects in our cache we should actively try to shrink. It works great in situations in which we have many objects (at least more than 100), because the approximation errors will be negligible. But if this is not the case, especially when total_objects < 100, we may end up concluding that we have no objects at all (total / 100 = 0, if total < 100). This is certainly not the biggest killer in the world, but may matter in very low kernel memory situations.

Signed-off-by: Glauber Costa <glommer@openvz.org>
Reviewed-by: Carlos Maiolino <cmaiolino@redhat.com>
Acked-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Acked-by: Mel Gorman <mgorman@suse.de>
Cc: Dave Chinner <david@fromorbit.com>
Cc: Al Viro <viro@zeniv.linux.org.uk>
Cc: "Theodore Ts'o" <tytso@mit.edu>
Cc: Adrian Hunter <adrian.hunter@intel.com>
Cc: Al Viro <viro@zeniv.linux.org.uk>
Cc: Artem Bityutskiy <artem.bityutskiy@linux.intel.com>
Cc: Arve Hjønnevåg <arve@android.com>
Cc: Carlos Maiolino <cmaiolino@redhat.com>
Cc: Christoph Hellwig <hch@lst.de>
Cc: Chuck Lever <chuck.lever@oracle.com>
Cc: Daniel Vetter <daniel.vetter@ffwll.ch>
Cc: David Rientjes <rientjes@google.com>
Cc: Gleb Natapov <gleb@redhat.com>
Cc: Greg Thelen <gthelen@google.com>
Cc: J. Bruce Fields <bfields@redhat.com>
Cc: Jan Kara <jack@suse.cz>
Cc: Jerome Glisse <jglisse@redhat.com>
Cc: John Stultz <john.stultz@linaro.org>
Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Cc: Kent Overstreet <koverstreet@google.com>
Cc: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Cc: Marcelo Tosatti <mtosatti@redhat.com>
Cc: Mel Gorman <mgorman@suse.de>
Cc: Steven Whitehouse <swhiteho@redhat.com>
Cc: Thomas Hellstrom <thellstrom@vmware.com>
Cc: Trond Myklebust <Trond.Myklebust@netapp.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>

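The arithmetic problem is plain integer truncation: dividing by 100 before applying the pressure percentage rounds small object counts down to zero. A minimal sketch of the two orderings (helper shape and use of mult_frac() are illustrative, not a copy of the patch):

    /* Old ordering: 99 objects at pressure 100 -> (99 / 100) * 100 == 0,
     * so the shrinker believes there is nothing to reclaim at all. */
    static long shrink_count_old(long total_objects)
    {
            return (total_objects / 100) * sysctl_vfs_cache_pressure;
    }

    /* Fixed ordering: multiply first, divide last ->
     * 99 objects at pressure 100 still report 99. */
    static long shrink_count_fixed(long total_objects)
    {
            return mult_frac(total_objects, sysctl_vfs_cache_pressure, 100);
    }
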
* fs: bump inode and dentry counters to long (Glauber Costa, 2013-09-11, 3 files, -14/+14)

This series reworks our current object cache shrinking infrastructure in two main ways:

 * Noticing that a lot of users copy and paste their own version of LRU lists for objects, we put some effort in providing a generic version. It is modeled after the filesystem users: dentries, inodes, and xfs (for various tasks), but we expect that other users could benefit in the near future with little or no modification. Let us know if you have any issues.

 * The underlying list_lru being proposed automatically and transparently keeps the elements in per-node lists, and is able to manipulate the node lists individually. Given this infrastructure, we are able to modify the up-to-now hammer called shrink_slab to proceed with node-reclaim instead of always searching memory from all over like it has been doing.

Per-node LRU lists are also expected to lead to less contention in the LRU locks on multi-node scans, since we are no longer fighting for a global lock. The locks usually disappear from the profilers with this change.

Although we have no official benchmarks for this version - be our guest to independently evaluate this - earlier versions of this series were performance tested (details at http://permalink.gmane.org/gmane.linux.kernel.mm/100537) yielding no visible performance regressions while yielding a better qualitative behavior in NUMA machines.

With this infrastructure in place, we can use the list_lru entry point to provide memcg isolation and per-memcg targeted reclaim. Historically, those two pieces of work have been posted together. This version presents only the infrastructure work, deferring the memcg work for a later time, so we can focus on getting this part tested. You can see more about the history of such work at http://lwn.net/Articles/552769/

Dave Chinner (18):
  dcache: convert dentry_stat.nr_unused to per-cpu counters
  dentry: move to per-sb LRU locks
  dcache: remove dentries from LRU before putting on dispose list
  mm: new shrinker API
  shrinker: convert superblock shrinkers to new API
  list: add a new LRU list type
  inode: convert inode lru list to generic lru list code.
  dcache: convert to use new lru list infrastructure
  list_lru: per-node list infrastructure
  shrinker: add node awareness
  fs: convert inode and dentry shrinking to be node aware
  xfs: convert buftarg LRU to generic code
  xfs: rework buffer dispose list tracking
  xfs: convert dquot cache lru to list_lru
  fs: convert fs shrinkers to new scan/count API
  drivers: convert shrinkers to new count/scan API
  shrinker: convert remaining shrinkers to count/scan API
  shrinker: Kill old ->shrink API.

Glauber Costa (7):
  fs: bump inode and dentry counters to long
  super: fix calculation of shrinkable objects for small numbers
  list_lru: per-node API
  vmscan: per-node deferred work
  i915: bail out earlier when shrinker cannot acquire mutex
  hugepage: convert huge zero page shrinker to new shrinker API
  list_lru: dynamically adjust node arrays

This patch:

There are situations in very large machines in which we can have a large quantity of dirty inodes, unused dentries, etc. This is particularly true when unmounting a filesystem, where every live object will eventually be discarded. Dave Chinner reported a problem with this while experimenting with the shrinker revamp patchset. So we believe it is time for a change.

This patch just moves ints to longs. Machines where it matters should have a big long anyway.

Signed-off-by: Glauber Costa <glommer@openvz.org>
Cc: Dave Chinner <dchinner@redhat.com>
Cc: "Theodore Ts'o" <tytso@mit.edu>
Cc: Adrian Hunter <adrian.hunter@intel.com>
Cc: Al Viro <viro@zeniv.linux.org.uk>
Cc: Artem Bityutskiy <artem.bityutskiy@linux.intel.com>
Cc: Arve Hjønnevåg <arve@android.com>
Cc: Carlos Maiolino <cmaiolino@redhat.com>
Cc: Christoph Hellwig <hch@lst.de>
Cc: Chuck Lever <chuck.lever@oracle.com>
Cc: Daniel Vetter <daniel.vetter@ffwll.ch>
Cc: Dave Chinner <dchinner@redhat.com>
Cc: David Rientjes <rientjes@google.com>
Cc: Gleb Natapov <gleb@redhat.com>
Cc: Greg Thelen <gthelen@google.com>
Cc: J. Bruce Fields <bfields@redhat.com>
Cc: Jan Kara <jack@suse.cz>
Cc: Jerome Glisse <jglisse@redhat.com>
Cc: John Stultz <john.stultz@linaro.org>
Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Cc: Kent Overstreet <koverstreet@google.com>
Cc: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Cc: Marcelo Tosatti <mtosatti@redhat.com>
Cc: Mel Gorman <mgorman@suse.de>
Cc: Steven Whitehouse <swhiteho@redhat.com>
Cc: Thomas Hellstrom <thellstrom@vmware.com>
Cc: Trond Myklebust <Trond.Myklebust@netapp.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>

* Add missing unlocks to error paths of mountpoint_last. (Dave Jones, 2013-09-11, 1 file, -1/+4)

Signed-off-by: Dave Jones <davej@fedoraproject.org>
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>

* ... and fold the renamed __vfs_follow_link() into its only caller (Al Viro, 2013-09-11, 1 file, -24/+14)

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>

* fs: remove vfs_follow_link (Christoph Hellwig, 2013-09-11, 1 file, -8/+2)

For a long time no filesystem has been using vfs_follow_link, and as seen by recent filesystem submissions any new use is accidental as well. Remove vfs_follow_link, document the replacement in Documentation/filesystems/porting and also rename __vfs_follow_link to match its only caller better.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>

* Merge branch 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/viro/vfs (Linus Torvalds, 2013-09-10, 8 files, -167/+221)

Pull vfs pile 3 (of many) from Al Viro:
 "Waiman's conversion of d_path() and bits related to it, kern_path_mountpoint(), several cleanups and fixes (exportfs one is -stable fodder, IMO). There definitely will be more... ;-/"

* 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/viro/vfs:
  split read_seqretry_or_unlock(), convert d_walk() to resulting primitives
  dcache: Translating dentry into pathname without taking rename_lock
  autofs4 - fix device ioctl mount lookup
  introduce kern_path_mountpoint()
  rename user_path_umountat() to user_path_mountpoint_at()
  take unlazy_walk() into umount_lookup_last()
  Kill indirect include of file.h from eventfd.h, use fdget() in cgroup.c
  prune_super(): sb->s_op is never NULL
  exportfs: don't assume that ->iterate() won't feed us too long entries
  afs: get rid of redundant ->d_name.len checks

| * split read_seqretry_or_unlock(), convert d_walk() to resulting primitives (Al Viro, 2013-09-09, 1 file, -33/+31)

Separate "check if we need to retry" from "unlock if we are done and had seq_writelock"; that allows us to use these guys in d_walk(), where we need to recheck every time we ascend back to the parent, but do *not* want to unlock until the very end. Lift rcu_read_lock/rcu_read_unlock out into callers.

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>

| * dcache: Translating dentry into pathname without taking rename_lock (Waiman Long, 2013-09-09, 1 file, -63/+133)

When running the AIM7 short workload, Linus' lockref patch eliminated most of the spinlock contention. However, there were still some left:

    8.46%  reaim  [kernel.kallsyms]  [k] _raw_spin_lock
         |--42.21%-- d_path
         |          proc_pid_readlink
         |          SyS_readlinkat
         |          SyS_readlink
         |          system_call
         |          __GI___readlink
         |
         |--40.97%-- sys_getcwd
         |          system_call
         |          __getcwd

The big one here is the rename_lock (seqlock) contention in d_path() and the getcwd system call. This patch eliminates the need to take the rename_lock while translating dentries into full pathnames. The rename_lock is taken to make sure that no rename operation can be ongoing while the translation is in progress. However, only one thread can take the rename_lock, thus blocking all the other threads that need it, even though the translation process won't make any change to the dentries.

This patch replaces the writer's write_seqlock/write_sequnlock sequence on the rename_lock in the callers of the prepend_path() and __dentry_path() functions with the reader's read_seqbegin/read_seqretry sequence within these two functions. As a result, the code has to retry if one or more rename operations were performed in the meantime. In addition, the RCU read lock is taken during the translation process to make sure that no dentries go away. To prevent livelock, the code switches back to taking the rename_lock if read_seqretry() fails three times.

To further reduce spinlock contention, this patch does not take the dentry's d_lock when copying the filename from the dentries. Instead, it treats the name pointer and length as unreliable and just copies the string byte by byte until it hits a null byte or the end of string as specified by the length. This avoids stepping into an invalid memory address. The error cases are left to be handled by the sequence number check.

The following code refactoring is also made:
 1. Move prepend('/') into prepend_name() to remove one conditional check.
 2. Move the global root check in prepend_path() back to the top of the while loop.

With this patch, _raw_spin_lock now accounts for only 1.2% of the total CPU cycles for the short workload. This patch also reduces the effect of running perf on its own profile, since the perf command itself can be a heavy user of the d_path() function depending on the complexity of the workload.

When taking the perf profile of the high-systime workload, the amount of spinlock contention contributed by running perf without this patch was about 16%. With this patch, the spinlock contention caused by running perf goes away and we get a more accurate perf profile.

Signed-off-by: Waiman Long <Waiman.Long@hp.com>
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>

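The retry scheme described above is the usual optimistic seqlock pattern with a bounded fallback to the writer lock. A condensed, hedged sketch follows; __prepend_path_rcu is a hypothetical stand-in for the worker that walks d_parent and copies names, and the retry limit mirrors the "three times" mentioned in the message.

    static int prepend_path_retry(const struct path *path, const struct path *root,
                                  char **buffer, int *buflen)
    {
            unsigned seq = 0;
            int error, retries = 0;
            bool locked = false;

    again:
            if (!locked)
                    seq = read_seqbegin(&rename_lock);
            rcu_read_lock();                        /* dentries cannot be freed under us */
            error = __prepend_path_rcu(path, root, buffer, buflen);
            rcu_read_unlock();

            if (!locked && read_seqretry(&rename_lock, seq)) {
                    /* a rename ran concurrently; the copied path may be stale */
                    if (++retries >= 3) {
                            /* avoid livelock: block renames and do it once more */
                            write_seqlock(&rename_lock);
                            locked = true;
                    }
                    goto again;
            }
            if (locked)
                    write_sequnlock(&rename_lock);
            return error;
    }
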
| * autofs4 - fix device ioctl mount lookup (Ian Kent, 2013-09-09, 1 file, -11/+12)

When reconnecting to automounts at startup, an autofs ioctl is used to find the device and inode of existing mounts so they can be used to open a file descriptor of possibly covered mounts. At this time the caller might not yet "own" the mount, so the lookup can trigger calling ->d_automount(). This causes automount to hang when trying to reconnect to direct or offset mount types. Consequently kern_path() can't be used, but kern_path_mountpoint() can be.

Signed-off-by: Ian Kent <raven@themaw.net>
Cc: Jeff Layton <jlayton@redhat.com>
Cc: Al Viro <viro@ZenIV.linux.org.uk>
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>

| * introduce kern_path_mountpoint() (Al Viro, 2013-09-09, 1 file, -11/+24)

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>

| * rename user_path_umountat() to user_path_mountpoint_at() (Al Viro, 2013-09-09, 3 files, -13/+15)

... and move the extern from linux/namei.h to fs/internal.h, along with that of vfs_path_lookup().

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>

| * take unlazy_walk() into umount_lookup_last() (Al Viro, 2013-09-09, 1 file, -33/+27)

... and massage it a bit to reduce nesting

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>

| * prune_super(): sb->s_op is never NULL (Al Viro, 2013-09-08, 1 file, -1/+1)

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>

| * exportfs: don't assume that ->iterate() won't feed us too long entries (Al Viro, 2013-09-08, 1 file, -1/+1)

On some filesystems it's impossible even with fs corruption, but we'd better not rely on that, what with the memcpy() into an on-stack array we are doing there.

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>

| * afs: get rid of redundant ->d_name.len checks (Al Viro, 2013-09-08, 1 file, -24/+0)

No dentry can get to directory modification methods without having passed either ->lookup() or ->atomic_open(); if a name is rejected by those two (or by ->d_hash()) with an error, it won't be seen by anything else.

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>

* | vfs: make sure we don't have a stale root path if unlazy_walk() fails (Linus Torvalds, 2013-09-10, 1 file, -1/+4)

When I moved the RCU walk termination into unlazy_walk(), I didn't copy quite all of it: for the successful RCU termination we properly add the necessary reference counts to our temporary copy of the root path, but for the failure case we need to make sure that any temporary root path information is cleared out (since it does _not_ have the proper reference counts from the RCU lookup).

We could clean up this mess by just always dropping the temporary root information, but Al points out that that would mean that a single lookup through symlinks could see multiple different root entries if it races with another thread doing chroot. Not that I think we should really care (we had that before too, back before we had a copy of the root path in the nameidata).

Al says he has a cunning plan. In the meantime, this is the minimal fix for the problem, even if it's not all that pretty.

Reported-by: Mace Moneta <moneta.mace@gmail.com>
Acked-by: Al Viro <viro@zeniv.linux.org.uk>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

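The shape of the minimal fix described above is simply "clear the temporary root on the failure path so nothing later tries to path_put() an unreferenced copy". A hedged sketch, assuming the fs/namei.c field names of that era rather than quoting the actual diff:

    /* Illustrative only: the failure leg of an unlazy_walk()-style helper. */
    static int unlazy_walk_fail(struct nameidata *nd)
    {
            /*
             * The RCU-walk copy of the root was never ref-counted; if we bail
             * out, drop it so ref-walk code never sees a stale root pointer.
             */
            if (!(nd->flags & LOOKUP_ROOT)) {
                    nd->root.mnt = NULL;
                    nd->root.dentry = NULL;
            }
            return -ECHILD;         /* caller falls back to ref-walk from scratch */
    }
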
* | Merge tag 'xfs-for-linus-v3.12-rc1' of git://oss.sgi.com/xfs/xfs (Linus Torvalds, 2013-09-09, 111 files, -11894/+13556)

Pull xfs updates from Ben Myers:
 "For 3.12-rc1 there are a number of bugfixes in addition to work to ease usage of shared code between libxfs and the kernel, the rest of the work to enable project and group quotas to be used simultaneously, performance optimisations in the log and the CIL, directory entry file type support, fixes for log space reservations, some spelling/grammar cleanups, and the addition of user namespace support.

  - introduce readahead to log recovery
  - add directory entry file type support
  - fix a number of spelling errors in comments
  - introduce new Q_XGETQSTATV quotactl for project quotas
  - add USER_NS support
  - log space reservation rework
  - CIL optimisations
  - kernel/userspace libxfs rework"

* tag 'xfs-for-linus-v3.12-rc1' of git://oss.sgi.com/xfs/xfs: (112 commits)
  xfs: XFS_MOUNT_QUOTA_ALL needed by userspace
  xfs: dtype changed xfs_dir2_sfe_put_ino to xfs_dir3_sfe_put_ino
  Fix wrong flag ASSERT in xfs_attr_shortform_getvalue
  xfs: finish removing IOP_* macros.
  xfs: inode log reservations are too small
  xfs: check correct status variable for xfs_inobt_get_rec() call
  xfs: inode buffers may not be valid during recovery readahead
  xfs: check LSN ordering for v5 superblocks during recovery
  xfs: btree block LSN escaping to disk uninitialised
  XFS: Assertion failed: first <= last && last < BBTOB(bp->b_length), file: fs/xfs/xfs_trans_buf.c, line: 568
  xfs: fix bad dquot buffer size in log recovery readahead
  xfs: don't account buffer cancellation during log recovery readahead
  xfs: check for underflow in xfs_iformat_fork()
  xfs: xfs_dir3_sfe_put_ino can be static
  xfs: introduce object readahead to log recovery
  xfs: Simplify xfs_ail_min() with list_first_entry_or_null()
  xfs: Register hotcpu notifier after initialization
  xfs: add xfs sb v4 support for dirent filetype field
  xfs: Add write support for dirent filetype field
  xfs: Add read-only support for dirent filetype field
  ...

| * | xfs: XFS_MOUNT_QUOTA_ALL needed by userspace (Dave Chinner, 2013-09-03, 2 files, -7/+6)

So move it to a header file shared with userspace.

Signed-off-by: Dave Chinner <dchinner@redhat.com>
Reviewed-by: Mark Tinguely <tinguely@sgi.com>
Signed-off-by: Ben Myers <bpm@sgi.com>

| * | xfs: dtype changed xfs_dir2_sfe_put_ino to xfs_dir3_sfe_put_ino (Dave Chinner, 2013-09-03, 2 files, -3/+4)

So fix up the export in xfs_dir2.h that is needed by userspace. <sigh>

Now xfs_dir3_sfe_put_ino has been made static. Revert 98f7462 ("xfs: xfs_dir3_sfe_put_ino can be static") to being non-static so that the code shared with userspace is identical again.

Signed-off-by: Dave Chinner <dchinner@redhat.com>
Reviewed-by: Ben Myers <bpm@sgi.com>
Signed-off-by: Ben Myers <bpm@sgi.com>

| * | Fix wrong flag ASSERT in xfs_attr_shortform_getvalue (Eric Sandeen, 2013-08-30, 1 file, -1/+1)

This ASSERT is testing an if_flags flag value against a di_aformat enum value. di_aformat is never assigned XFS_IFINLINE.

This happens to work for now, because XFS_IFINLINE has the same value as XFS_DINODE_FMT_LOCAL, and that's tested just before we call this function.

However, I think the intention is to assert that we have read in the data, i.e. XFS_IFINLINE on if_flags, before we use if_data. This is done in other places through the code as well.

Signed-off-by: Eric Sandeen <sandeen@redhat.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Ben Myers <bpm@sgi.com>

| * | xfs: finish removing IOP_* macros. (Dave Chinner, 2013-08-30, 4 files, -20/+17)

In optimising the CIL operations, some of the IOP_* macros for calling log item operations were removed. Remove the rest of them as Christoph requested.

Signed-off-by: Dave Chinner <dchinner@redhat.com>
Reviewed-by: Geoffrey Wehrman <gwehrman@sgi.com>
Reviewed-by: Mark Tinguely <tinguely@sgi.com>
Signed-off-by: Ben Myers <bpm@sgi.com>

| * | xfs: inode log reservations are too small (Dave Chinner, 2013-08-30, 1 file, -19/+53)

We've been seeing occasional problems with log space leaks and transaction underruns such as this for some time:

    XFS (dm-0): xlog_write: reservation summary:
      trans type  = FSYNC_TS (36)
      unit res    = 2740 bytes
      current res = -4 bytes
      total reg   = 0 bytes (o/flow = 0 bytes)
      ophdrs      = 0 (ophdr space = 0 bytes)
      ophdr + reg = 0 bytes
      num regions = 0

Turns out that xfstests generic/311 is reliably reproducing this problem with the test it runs at sequence 16 of its execution. It is a 100% reliable reproducer with the mkfs configuration of "-b size=1024 -m crc=1" on a 10GB scratch device.

The problem? Inode forks in btree format are logged in memory format, not disk format (i.e. bmbt format, not bmdr format). That means there is a btree block header being logged, when such a structure is never written to the inode fork in bmdr format. The bmdr header in the inode is only 4 bytes, while the bmbt header is 24 bytes for v4 filesystems and 72 bytes for v5 filesystems.

We currently reserve the inode size plus the rounded up overhead of logging a buffer, which is 128 bytes. That means the reservation for a 512 byte inode is 640 bytes. What we can actually log is:

    inode core, data and attr fork   = 512 bytes
    inode log format + log op header = 56 + 12 = 68 bytes
    data fork bmbt hdr               = 24/72 bytes
    attr fork bmbt hdr               = 24/72 bytes

So, for a v2 inode we can log at least 628 bytes, but if we split that inode over the end of the log across log buffers, we also need another log op header, which takes us to 640 bytes. If there's another reservation taken out of this that I haven't taken into account (perhaps multiple iclog splits?) or I haven't correctly calculated the bmbt format space used (entirely possible), then we will overrun it.

For v3 inodes the maximum is actually 724 bytes, and even a single maximally sized btree format fork can blow it (652 bytes). And that's exactly what is happening with the FSYNC_TS transaction in the above output - it's consumed 644 bytes of space after the CIL context took the space reserved for it (2100 bytes).

This problem has always been present in the XFS code - the btree format inode forks have always been logged in this manner. Hence there has always been the possibility of an overrun with such a transaction. The CRC code has just exposed it frequently enough to be able to debug and understand the root cause....

So, let's fix all the inode log space reservations.

[ I'm so glad we spent the effort to clean up the transaction reservation code. This is an easy fix now. ]

Signed-off-by: Dave Chinner <dchinner@redhat.com>
Reviewed-by: Mark Tinguely <tinguely@sgi.com>
Signed-off-by: Ben Myers <bpm@sgi.com>

| * | xfs: check correct status variable for xfs_inobt_get_rec() call (Brian Foster, 2013-08-30, 1 file, -1/+1)

The call to xfs_inobt_get_rec() in xfs_dialloc_ag() passes 'j' as the output status variable. The immediately following XFS_WANT_CORRUPTED_GOTO() checks the value of 'i', which is from the previous lookup call and has already been checked. Fix the corruption check to use 'j'.

Signed-off-by: Brian Foster <bfoster@redhat.com>
Reviewed-by: Eric Sandeen <sandeen@redhat.com>
Reviewed-by: Dave Chinner <dchinner@redhat.com>
Signed-off-by: Ben Myers <bpm@sgi.com>

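The fix described is a one-character change to which status variable gets checked. Roughly, as a fragment in the style of xfs_dialloc_ag() (paraphrased, not the literal diff):

    /* 'i' was the status of the previous lookup and is already verified;
     * this call reports its success/failure into 'j'. */
    error = xfs_inobt_get_rec(cur, &rec, &j);
    if (error)
            goto error0;
    XFS_WANT_CORRUPTED_GOTO(j == 1, error0);    /* previously checked i == 1 */
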
| * | xfs: inode buffers may not be valid during recovery readahead (Dave Chinner, 2013-08-30, 3 files, -4/+35)

CRC enabled filesystems fail log recovery with 100% reliability on xfstests xfs/085 with the following failure:

    XFS (vdb): Mounting Filesystem
    XFS (vdb): Starting recovery (logdev: internal)
    XFS (vdb): Corruption detected. Unmount and run xfs_repair
    XFS (vdb): bad inode magic/vsn daddr 144 #0 (magic=0)
    XFS: Assertion failed: 0, file: fs/xfs/xfs_inode_buf.c, line: 95

The problem is that the inode buffer has not been recovered before the readahead on the inode buffer is issued. The checkpoint being recovered actually allocates the inode chunk we are doing readahead from, so what comes from disk during readahead is essentially random and the verifier barfs on it.

This inode buffer readahead problem affects non-crc filesystems, too, but xfstests does not trigger it at all on such configurations....

Signed-off-by: Dave Chinner <dchinner@redhat.com>
Reviewed-by: Ben Myers <bpm@sgi.com>
Signed-off-by: Ben Myers <bpm@sgi.com>

| * | xfs: check LSN ordering for v5 superblocks during recovery (Dave Chinner, 2013-08-30, 1 file, -13/+156)

Log recovery has some strict ordering requirements which unordered or reordered metadata writeback can defeat. This can occur when an item is logged in a transaction, written back to disk, and then logged in a new transaction before the tail of the log is moved past the original modification.

The result of this is that when we read an object off disk for recovery purposes, the buffer that we read may not contain the object type that recovery is expecting, and hence at the end of the checkpoint being recovered we have an invalid object in memory. This isn't usually a problem, as recovery will then replay all the other checkpoints and that brings the object back to a valid and correct state, but the issue is that while the object is in the invalid state it can be flushed to disk. This results in the object verifier failing and triggering a corruption shutdown of log recovery. This is correct behaviour for the verifiers - the problem is that we are not detecting that the object we've read off disk is newer than the transaction we are replaying.

All metadata in v5 filesystems has the LSN of its last modification stamped in it. This enables log recovery to read that field and determine the age of the object on disk correctly. If the LSN of the object on disk is older than the transaction being replayed, then we replay the modification. If the LSN of the object matches or is more recent than the transaction's LSN, then we should avoid overwriting the object, as that is what leads to the transient corrupt state.

Signed-off-by: Dave Chinner <dchinner@redhat.com>
Reviewed-by: Mark Tinguely <tinguely@sgi.com>
Signed-off-by: Ben Myers <bpm@sgi.com>

| * | xfs: btree block LSN escaping to disk uninitialised (Dave Chinner, 2013-08-30, 1 file, -0/+2)

When testing LSN ordering code for v5 superblocks, it was discovered that the LSN embedded in the generic btree blocks was occasionally uninitialised. These values didn't get written to disk by metadata writeback - they got written by previous transactions in log recovery.

The issue here is that when the block is first allocated and initialised, the LSN field was not initialised - it gets overwritten before IO is issued on the buffer - but the value that is logged by transactions that modify the header before it is written to disk (and initialised) contains garbage. Hence the first recovery of the buffer will stamp garbage into the LSN field, and that can cause subsequent transactions to not replay correctly.

The fix is simply to initialise the bb_lsn field to zero when we initialise the block for the first time.

Signed-off-by: Dave Chinner <dchinner@redhat.com>
Reviewed-by: Mark Tinguely <tinguely@sgi.com>
Signed-off-by: Ben Myers <bpm@sgi.com>

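The fix amounts to stamping the LSN field with zero when the generic btree block header is first set up, so the value that later gets logged is never stack or heap garbage. A hedged sketch, using the v5 on-disk field names (bb_u.l / bb_u.s carry the long- and short-format headers) but not the literal patch hunk:

    /* Sketch: zero the LSN of a freshly initialised btree block so the
     * first transaction that logs the header never captures garbage. */
    static void xfs_btree_block_init_lsn(struct xfs_btree_block *block,
                                         bool long_format)
    {
            if (long_format)
                    block->bb_u.l.bb_lsn = cpu_to_be64(0);
            else
                    block->bb_u.s.bb_lsn = cpu_to_be64(0);
    }
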
| * | XFS: Assertion failed: first <= last && last < BBTOB(bp->b_length), file: fs/xfs/xfs_trans_buf.c, line: 568 (Dave Chinner, 2013-08-30, 2 files, -8/+19)

The calculation doesn't take into account the size of the dir v3 header, so it overestimates the hash entries in a node. This causes directory buffer overruns when splitting and merging nodes.

Signed-off-by: Dave Chinner <dchinner@redhat.com>
Tested-by: Brian Foster <bfoster@redhat.com>
Reviewed-by: Mark Tinguely <tinguely@sgi.com>
Signed-off-by: Ben Myers <bpm@sgi.com>

| * | xfs: fix bad dquot buffer size in log recovery readahead (Dave Chinner, 2013-08-29, 1 file, -1/+1)

xfstests xfs/087 fails 100% reliably with this assert:

    XFS (vdb): Mounting Filesystem
    XFS (vdb): Starting recovery (logdev: internal)
    XFS: Assertion failed: bp->b_flags & XBF_STALE, file: fs/xfs/xfs_buf.c, line: 548

while trying to read a dquot buffer in xlog_recover_dquot_ra_pass2(). The issue is that the buffer length to read that is passed to xfs_buf_readahead is in units of filesystem blocks, not disk blocks (i.e. FSB, not daddr). Fix it by putting the correct conversion in place.

Signed-off-by: Dave Chinner <dchinner@redhat.com>
Reviewed-by: Ben Myers <bpm@sgi.com>
Signed-off-by: Ben Myers <bpm@sgi.com>

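The unit mix-up is between filesystem blocks (FSB) and the 512-byte basic blocks that readahead lengths are expressed in. Approximately, paraphrasing the dquot log-format fields rather than quoting the diff:

    /* xlog_recover_dquot_ra_pass2(), roughly: qlf_len is a length in
     * filesystem blocks, but xfs_buf_readahead() wants basic (512-byte) blocks. */
    xfs_buf_readahead(mp->m_ddev_targp, dq_f->qlf_blkno,
                      XFS_FSB_TO_BB(mp, dq_f->qlf_len),   /* was passed as raw FSBs */
                      NULL);
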
| * | xfs: don't account buffer cancellation during log recovery readahead (Dave Chinner, 2013-08-29, 1 file, -26/+35)

When doing readahead in log recovery, we check to see if buffers are cancelled before doing readahead. If we find a cancelled buffer, however, we always decrement the reference count we have on it, and that means that readahead is causing a double decrement of the cancelled buffer reference count.

This results in log recovery *replaying cancelled buffers* as the actual recovery pass does not find the cancelled buffer entry in the commit phase of the second pass across a transaction. On debug kernels, this results in an ASSERT failure like so:

    XFS: Assertion failed: !(flags & XFS_BLF_CANCEL), file: fs/xfs/xfs_log_recover.c, line: 1815

xfstests generic/311 reproduces this ASSERT failure with 100% reproducibility. Fix it by making readahead only peek at the buffer cancelled state rather than doing the full accounting that xlog_check_buffer_cancelled() does.

Signed-off-by: Dave Chinner <dchinner@redhat.com>
Reviewed-by: Ben Myers <bpm@sgi.com>
Signed-off-by: Ben Myers <bpm@sgi.com>

| * | xfs: check for underflow in xfs_iformat_fork() (Dan Carpenter, 2013-08-26, 1 file, -1/+2)

The "di_size" variable comes from the disk and it's a signed 64 bit. We check the upper limit but we should check for negative numbers as well.

Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com>
Reviewed-by: Ben Myers <bpm@sgi.com>
Signed-off-by: Ben Myers <bpm@sgi.com>

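Since di_size is read straight off disk as a signed 64-bit value, the sanity check needs a lower bound as well as the existing upper bound. A hedged sketch of the check in the local-format branch of xfs_iformat_fork() (the surrounding code is paraphrased):

    xfs_fsize_t di_size = be64_to_cpu(dip->di_size);   /* signed, from disk */

    if (unlikely(di_size < 0 ||
                 di_size > XFS_DFORK_DSIZE(dip, ip->i_mount))) {
            xfs_warn(ip->i_mount,
                     "corrupt inode %Lu (bad size %Ld for local format).",
                     (unsigned long long)ip->i_ino, (long long)di_size);
            return XFS_ERROR(EFSCORRUPTED);
    }
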
| * | xfs: xfs_dir3_sfe_put_ino can be static (Fengguang Wu, 2013-08-26, 1 file, -1/+1)

TO: Dave Chinner <david@fromorbit.com>
CC: Ben Myers <bpm@sgi.com>
CC: linux-kernel@vger.kernel.org

Signed-off-by: Fengguang Wu <fengguang.wu@intel.com>
Reviewed-by: Ben Myers <bpm@sgi.com>
Signed-off-by: Ben Myers <bpm@sgi.com>

| * | xfs: introduce object readahead to log recovery (Zhi Yong Wu, 2013-08-23, 1 file, -7/+152)

Log recovery can take a long time because it is single threaded and bound by read latency. Most of that time is spent waiting for read IO to complete, so introducing object readahead to log recovery obviously reduces the recovery time.

Log recovery time stats:

             w/o this patch    w/ this patch
    real:    0m15.023s         0m7.802s
    user:    0m0.001s          0m0.001s
    sys:     0m0.246s          0m0.107s

Signed-off-by: Zhi Yong Wu <wuzhy@linux.vnet.ibm.com>
Reviewed-by: Ben Myers <bpm@sgi.com>
Signed-off-by: Ben Myers <bpm@sgi.com>

| * | xfs: Simplify xfs_ail_min() with list_first_entry_or_null() (Jie Liu, 2013-08-23, 2 files, -14/+12)

At xfs_ail_min(), we check whether the AIL list is empty before returning the first item in it, using list_empty() and list_first_entry(). This can be simplified a bit with the new list operation routine list_first_entry_or_null(), which was introduced by:

    commit 6d7581e62f8be462440d7b22c6361f7c9fa4902b
    list: introduce list_first_entry_or_null

v2: make xfs_ail_min() a static inline function and move it to xfs_trans_priv.h, as per Dave Chinner's comments.

Signed-off-by: Jie Liu <jeff.liu@oracle.com>
Reviewed-by: Ben Myers <bpm@sgi.com>
Signed-off-by: Ben Myers <bpm@sgi.com>

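The before/after shape of the helper, per the description above (AIL field names follow the structures of that era; treat this as a sketch rather than the exact patch):

    /* Before: explicit empty check plus list_first_entry(). */
    static struct xfs_log_item *
    xfs_ail_min_old(struct xfs_ail *ailp)
    {
            if (list_empty(&ailp->xa_ail))
                    return NULL;
            return list_first_entry(&ailp->xa_ail, struct xfs_log_item, li_ail);
    }

    /* After: one call that returns NULL for an empty list. */
    static inline struct xfs_log_item *
    xfs_ail_min(struct xfs_ail *ailp)
    {
            return list_first_entry_or_null(&ailp->xa_ail,
                                            struct xfs_log_item, li_ail);
    }
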
| * | xfs: Register hotcpu notifier after initialization (Richard Weinberger, 2013-08-22, 1 file, -6/+7)

Currently the code initializes mp->m_icsb_mutex and other things _after_ register_hotcpu_notifier(). As the notifier takes mp->m_icsb_mutex, it can happen that it takes the lock before its initialization.

Signed-off-by: Richard Weinberger <richard@nod.at>
Reviewed-by: Ben Myers <bpm@sgi.com>
Signed-off-by: Ben Myers <bpm@sgi.com>

| * | xfs: add xfs sb v4 support for dirent filetype field (Mark Tinguely, 2013-08-22, 1 file, -3/+7)

Add XFS superblock v4 support for the file type field in the directory entry feature. This support adds a feature bit for version 4 superblocks and leaves the original superblock 5 incompatibility bit.

Signed-off-by: Mark Tinguely <tinguely@sgi.com>
Reviewed-by: Geoffrey Wehrman <gwehrman@sgi.com>
Signed-off-by: Ben Myers <bpm@sgi.com>

| * | xfs: Add write support for dirent filetype field (Dave Chinner, 2013-08-22, 7 files, -9/+47)

Add support to propagate and add filetype values into the on-disk dirents. This involves passing the filetype into the xfs_da_args structure along with the name and namelength for directory operations, and encoding it into the dirent at the same time we write the inode number into the dirent.

With write support, add the feature flag to the XFS_SB_FEAT_INCOMPAT_ALL mask so we can now mount filesystems with this feature set.

Performance of directory recursion is now much improved. Parallel walk of ~50 million directory entries across hundreds of directories improves significantly. Unpatched, no CRCs:

    Walking via ls -R
    real    3m19.886s
    user    6m36.960s
    sys     28m19.087s

This is doing roughly 500 getdents() calls per second, and 250,000 inode lookups per second to determine the inode type, at roughly 17,000 read IOPS. CPU usage is 90% kernel space.

With dtype support patched in and the fileset recreated with CRCs enabled:

    Walking via ls -R
    real    0m31.316s
    user    6m32.975s
    sys     0m21.111s

This is doing roughly 3500 getdents() calls per second at 16,000 IOPS. There are no inode lookups at all. CPU usage is almost 100% userspace. This is a big win for recursive directory walks that only need to find file names and file types.

Signed-off-by: Dave Chinner <dchinner@redhat.com>
Reviewed-by: Mark Tinguely <tinguely@sgi.com>
Signed-off-by: Ben Myers <bpm@sgi.com>

| * | xfs: Add read-only support for dirent filetype field (Dave Chinner, 2013-08-22, 14 files, -123/+362)

Add support for the file type field in directory entries so that readdir can return to userspace the type of the inode the dirent points to, without first having to read the inode off disk.

The encoding of the type field is a single byte that is added to the end of the directory entry name length. For all intents and purposes, it appends a "hidden" byte to the name field which contains the type information. As the directory entry is already of dynamic size, helpers are already required to access and decode the directory entry structures.

Hence the relevant extraction and iteration helpers are updated to understand the hidden byte. Helpers for reading and writing the filetype field from the directory entries are also added. Only the read helpers are used by this patch. It also adds all the code necessary to read the type information out of the dirents on disk.

Further we add the superblock feature bit and helpers to indicate that we understand the on-disk format change. This is not a compatible change - existing kernels cannot read the new format successfully - so an incompatible feature flag is added. We don't allow filesystems to mount with this flag yet - that will be added once write support is added.

Finally, the code to take the type from the VFS, convert it to an XFS on-disk type and put it into the xfs_name structures passed around is added, but the directory code does not use this field yet. That will be in the next patch.

Signed-off-by: Dave Chinner <dchinner@redhat.com>
Reviewed-by: Mark Tinguely <tinguely@sgi.com>
Signed-off-by: Ben Myers <bpm@sgi.com>

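With the encoding described above - one type byte stored immediately after the name bytes - the read helper is tiny: the type lives at name[namelen]. A hedged sketch (helper and constant names follow the dir3 ftype conventions, but this is an illustration, not the patch itself):

    /* On-disk dirent layout with ftype enabled:
     *   [inumber][namelen][name bytes ...][ftype byte][tag]
     * so the "hidden" type byte sits right after the name. */
    static __uint8_t
    xfs_dir3_dirent_get_ftype(struct xfs_mount *mp,
                              struct xfs_dir2_data_entry *dep)
    {
            if (xfs_sb_version_hasftype(&mp->m_sb)) {
                    __uint8_t ftype = dep->name[dep->namelen];

                    if (likely(ftype < XFS_DIR3_FT_MAX))
                            return ftype;
            }
            return XFS_DIR3_FT_UNKNOWN;   /* feature off, or value out of range */
    }
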
| * | xfs: Add support for the Q_XGETQSTATV (Chandra Seetharaman, 2013-08-21, 3 files, -0/+97)

For XFS, add support for the Q_XGETQSTATV quotactl command.

Signed-off-by: Chandra Seetharaman <sekharan@us.ibm.com>
Reviewed-by: Rich Johnston <rjohnston@sgi.com>
Signed-off-by: Ben Myers <bpm@sgi.com>

| * | quota: Add a new quotactl command Q_XGETQSTATV (Chandra Seetharaman, 2013-08-20, 1 file, -0/+29)

XFS now supports three types of quotas (user, group and project). The current version of Q_XGETQSTAT has support for only two types of quotas. In order to support three types of quotas, the interface, specifically struct fs_quota_stat, needs to be expanded. The current version of fs_quota_stat does not allow expansion without breaking backward compatibility. So, a new quotactl command and a new data structure need to be added.

This patch adds a new command Q_XGETQSTATV to quotactl() which takes a new data structure fs_quota_statv. This new data structure provides support for future expansion and backward compatibility.

Callers of the new quotactl command have to set the version of the data structure being passed, and the kernel will fill in as much data as requested. If the kernel does not support the user-space provided version, EINVAL will be returned. User-space can reduce the version number and call the same quotactl again.

Signed-off-by: Chandra Seetharaman <sekharan@us.ibm.com>
Reviewed-by: Jan Kara <jack@suse.cz>
Reviewed-by: Rich Johnston <rjohnston@sgi.com>
Signed-off-by: Ben Myers <bpm@sgi.com>
[v2: Applied rjohnston's suggestions as per Chandra's request. -bpm]

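From userspace the negotiation works as described: the caller fills in the structure version it understands and retries (or falls back to the older interface) if the kernel returns EINVAL. A hedged usage sketch, assuming the command, struct fs_quota_statv and FS_QSTATV_VERSION1 are exposed via <linux/dqblk_xfs.h>:

    #include <sys/quota.h>
    #include <linux/dqblk_xfs.h>    /* Q_XGETQSTATV, struct fs_quota_statv */
    #include <errno.h>
    #include <stdio.h>

    static int get_xfs_quota_statv(const char *blockdev)
    {
            struct fs_quota_statv sv = { .qs_version = FS_QSTATV_VERSION1 };

            if (quotactl(QCMD(Q_XGETQSTATV, USRQUOTA), blockdev, 0,
                         (caddr_t)&sv) < 0) {
                    if (errno == EINVAL) {
                            /* kernel doesn't know this version: retry with a
                             * lower one, or fall back to Q_XGETQSTAT */
                    }
                    return -1;
            }
            printf("user quota file inode: %llu\n",
                   (unsigned long long)sv.qs_uquota.qfs_ino);
            return 0;
    }
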
| * | xfs: fix the comment of xfs_mountfs() (Zhi Yong Wu, 2013-08-20, 1 file, -1/+1)

Signed-off-by: Zhi Yong Wu <wuzhy@linux.vnet.ibm.com>
Reviewed-by: Mark Tinguely <tinguely@sgi.com>
Signed-off-by: Ben Myers <bpm@sgi.com>

| * | xfs: fix the comment of xfs_sb_quiet_read_verify() (Zhi Yong Wu, 2013-08-20, 1 file, -1/+1)

Signed-off-by: Zhi Yong Wu <wuzhy@linux.vnet.ibm.com>
Reviewed-by: Mark Tinguely <tinguely@sgi.com>
Signed-off-by: Ben Myers <bpm@sgi.com>

| * | xfs: fix the comment of xlog_recover_do_dquot_buffer() (Zhi Yong Wu, 2013-08-20, 1 file, -1/+1)

Signed-off-by: Zhi Yong Wu <wuzhy@linux.vnet.ibm.com>
Reviewed-by: Mark Tinguely <tinguely@sgi.com>
Signed-off-by: Ben Myers <bpm@sgi.com>

| * | xfs: fix the comment of xfs_log_unmount_write() (Zhi Yong Wu, 2013-08-20, 1 file, -1/+1)

Signed-off-by: Zhi Yong Wu <wuzhy@linux.vnet.ibm.com>
Reviewed-by: Mark Tinguely <tinguely@sgi.com>
Signed-off-by: Ben Myers <bpm@sgi.com>

| * | xfs: fix the comment of xfs_ifree_cluster() (Zhi Yong Wu, 2013-08-20, 1 file, -1/+1)

Signed-off-by: Zhi Yong Wu <wuzhy@linux.vnet.ibm.com>
Reviewed-by: Mark Tinguely <tinguely@sgi.com>
Signed-off-by: Ben Myers <bpm@sgi.com>

| * | xfs: fix the comment of xfs_ialloc_ag_select() (Zhi Yong Wu, 2013-08-20, 1 file, -1/+1)

Signed-off-by: Zhi Yong Wu <wuzhy@linux.vnet.ibm.com>
Reviewed-by: Mark Tinguely <tinguely@sgi.com>
Signed-off-by: Ben Myers <bpm@sgi.com>

| * | xfs: fix the comment of xfs_extent_busy_update_extent() (Zhi Yong Wu, 2013-08-20, 1 file, -1/+1)

Signed-off-by: Zhi Yong Wu <wuzhy@linux.vnet.ibm.com>
Reviewed-by: Mark Tinguely <tinguely@sgi.com>
Signed-off-by: Ben Myers <bpm@sgi.com>

| * | xfs: fix the comment of xfs_setsize_buftarg_early() (Zhi Yong Wu, 2013-08-20, 1 file, -1/+1)

Signed-off-by: Zhi Yong Wu <wuzhy@linux.vnet.ibm.com>
Reviewed-by: Mark Tinguely <tinguely@sgi.com>
Signed-off-by: Ben Myers <bpm@sgi.com>

| * | xfs: fix the comment of xfs_bmap_punch_delalloc_range() (Zhi Yong Wu, 2013-08-20, 1 file, -1/+1)

Signed-off-by: Zhi Yong Wu <wuzhy@linux.vnet.ibm.com>
Reviewed-by: Mark Tinguely <tinguely@sgi.com>
Signed-off-by: Ben Myers <bpm@sgi.com>

| * | xfs: fix the comment of xfs_bmap_last_before() (Zhi Yong Wu, 2013-08-20, 1 file, -1/+1)

Signed-off-by: Zhi Yong Wu <wuzhy@linux.vnet.ibm.com>
Reviewed-by: Mark Tinguely <tinguely@sgi.com>
Signed-off-by: Ben Myers <bpm@sgi.com>