...
* vfs: do_last(): clean up retry
  Author: Miklos Szeredi   Date: 2012-07-14   Files: 1   Lines: -15/+21

  Move the lookup retry logic to the bottom of the function to make the normal case simpler to read.

  Reported-by: David Howells <dhowells@redhat.com>
  Signed-off-by: Miklos Szeredi <mszeredi@suse.cz>
  Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
* vfs: do_last(): clean up bool
  Author: Miklos Szeredi   Date: 2012-07-14   Files: 1   Lines: -14/+14

  Consistently use bool for boolean values in do_last().

  Reported-by: David Howells <dhowells@redhat.com>
  Signed-off-by: Miklos Szeredi <mszeredi@suse.cz>
  Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
* vfs: do_last(): clean up labels
  Author: Miklos Szeredi   Date: 2012-07-14   Files: 1   Lines: -5/+5

  Reported-by: David Howells <dhowells@redhat.com>
  Signed-off-by: Miklos Szeredi <mszeredi@suse.cz>
  Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
* vfs: do_last(): clean up error handling
  Author: Miklos Szeredi   Date: 2012-07-14   Files: 1   Lines: -15/+8

  Signed-off-by: Miklos Szeredi <mszeredi@suse.cz>
  Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
* vfs: remove open intents from nameidata
  Author: Miklos Szeredi   Date: 2012-07-14   Files: 4   Lines: -153/+48

  All users of open intents have been converted to use ->atomic_{open,create}. This patch gets rid of nd->intent.open and related infrastructure.

  Signed-off-by: Miklos Szeredi <mszeredi@suse.cz>
  Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
* 9p: implement i_op->atomic_open()
  Author: Miklos Szeredi   Date: 2012-07-14   Files: 2   Lines: -84/+137

  Add an ->atomic_open implementation which replaces the atomic open+create operation implemented via ->create. No functionality is changed.

  Signed-off-by: Miklos Szeredi <mszeredi@suse.cz>
  CC: Eric Van Hensbergen <ericvh@gmail.com>
  Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
* ceph: implement i_op->atomic_open()
  Author: Miklos Szeredi   Date: 2012-07-14   Files: 3   Lines: -38/+56

  Add an ->atomic_open implementation which replaces the atomic lookup+open+create operation implemented via ->lookup and ->create operations.

  Signed-off-by: Miklos Szeredi <mszeredi@suse.cz>
  CC: Sage Weil <sage@newdream.net>
  Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
* ceph: remove unused arg from ceph_lookup_open()
  Author: Miklos Szeredi   Date: 2012-07-14   Files: 3   Lines: -6/+4

  What was the purpose of this?

  Signed-off-by: Miklos Szeredi <mszeredi@suse.cz>
  CC: Sage Weil <sage@newdream.net>
  Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
* cifs: implement i_op->atomic_open()
  Author: Miklos Szeredi   Date: 2012-07-14   Files: 3   Lines: -198/+247

  Add an ->atomic_open implementation which replaces the atomic lookup+open+create operation implemented via ->lookup and ->create operations.

  Signed-off-by: Miklos Szeredi <mszeredi@suse.cz>
  CC: Steve French <sfrench@samba.org>
  Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
* fuse: implement i_op->atomic_open()
  Author: Miklos Szeredi   Date: 2012-07-14   Files: 1   Lines: -27/+67

  Add an ->atomic_open implementation which replaces the atomic open+create operation implemented via ->create. No functionality is changed.

  Signed-off-by: Miklos Szeredi <mszeredi@suse.cz>
  Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
* nfs: don't use intents for checking atomic open
  Author: Miklos Szeredi   Date: 2012-07-14   Files: 1   Lines: -20/+4

  is_atomic_open() is now only used by nfs4_lookup_revalidate() to check whether it's okay to skip normal revalidation. It does a racy check for mount read-onlyness and falls back to normal revalidation if the open would fail. This makes little sense now that this function isn't used for determining whether to actually open the file or not.

  The d_mountpoint() check still makes sense since it is an indication that we might be following a mount and so open may not revalidate the dentry.

  Signed-off-by: Miklos Szeredi <mszeredi@suse.cz>
  CC: Trond Myklebust <Trond.Myklebust@netapp.com>
  Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
* nfs: don't use nd->intent.open.flags
  Author: Miklos Szeredi   Date: 2012-07-14   Files: 1   Lines: -5/+4

  Instead check LOOKUP_EXCL in nd->flags, which is basically what the open intent flags were used for.

  Signed-off-by: Miklos Szeredi <mszeredi@suse.cz>
  CC: Trond Myklebust <Trond.Myklebust@netapp.com>
  Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
* nfs: clean up ->create in nfs_rpc_ops
  Author: Miklos Szeredi   Date: 2012-07-14   Files: 5   Lines: -70/+15

  Don't pass nfs_open_context() to ->create(). Only the NFS4 implementation needed that, and only because it wanted to return an open file using open intents. That task has been replaced by ->atomic_open, so it is no longer necessary to pass the context to the create rpc operation.

  Despite nfs4_proc_create apparently being okay with a NULL context, it Oopses somewhere down the call chain. So allocate a context here.

  Signed-off-by: Miklos Szeredi <mszeredi@suse.cz>
  CC: Trond Myklebust <Trond.Myklebust@netapp.com>
  Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
* nfs: implement i_op->atomic_open()
  Author: Miklos Szeredi   Date: 2012-07-14   Files: 1   Lines: -86/+97

  Replace the NFS4-specific ->lookup implementation with an ->atomic_open implementation and use the generic nfs_lookup for other lookups.

  Signed-off-by: Miklos Szeredi <mszeredi@suse.cz>
  CC: Trond Myklebust <Trond.Myklebust@netapp.com>
  Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
* vfs: add i_op->atomic_open()
  Author: Miklos Szeredi   Date: 2012-07-14   Files: 6   Lines: -2/+270

  Add a new inode operation which is called on the last component of an open. Using this the filesystem can look up, possibly create and open the file in one atomic operation. If it cannot perform this (e.g. the file type turned out to be wrong) it may signal this by returning NULL instead of an open struct file pointer.

  i_op->atomic_open() is only called if the last component is negative or needs lookup. Handling cached positive dentries here doesn't add much value: these can be opened using f_op->open(). If the cached file turns out to be invalid, the open can be retried, this time using ->atomic_open() with a fresh dentry.

  For now leave the old way of using open intents in lookup and revalidate in place. This will be removed once all the users are converted.

  David Howells noticed that if ->atomic_open() opens the file but does not create it, handle_truncate() will be called on it even if it is not a regular file. Fix this by checking the file type in this case too.

  Signed-off-by: Miklos Szeredi <mszeredi@suse.cz>
  Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
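To make the new hook's contract easier to picture, here is a small standalone sketch of the fallback logic the message above describes: try ->atomic_open() if the filesystem provides it, and treat a NULL return as "use the ordinary lookup-plus-open path". The types and helper names below are hypothetical stand-ins, not the real fs/namei.c code or the kernel's inode_operations prototype.

    /*
     * Hypothetical, self-contained sketch -- not the kernel's namei.c.
     * Only the decision flow from the commit message is modeled here.
     */
    #include <stdio.h>
    #include <stddef.h>

    struct file;                                    /* opaque stand-in */

    struct inode_ops_sketch {
        /* May return NULL to ask the caller to fall back. */
        struct file *(*atomic_open)(const char *name, int flags);
    };

    static struct file *ordinary_lookup_then_open(const char *name, int flags)
    {
        printf("ordinary lookup + f_op->open of %s (flags %#x)\n", name, flags);
        return NULL;                                /* placeholder result */
    }

    struct file *open_last_component(const struct inode_ops_sketch *ops,
                                     const char *name, int flags)
    {
        if (ops->atomic_open) {
            /* Filesystem may look up, create and open in one shot. */
            struct file *filp = ops->atomic_open(name, flags);
            if (filp)
                return filp;
            /* NULL: e.g. wrong file type; retry via the ordinary path. */
        }
        return ordinary_lookup_then_open(name, flags);
    }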
* vfs: lookup_open(): expand lookup_hash()
  Author: Miklos Szeredi   Date: 2012-07-14   Files: 1   Lines: -1/+11

  Copy __lookup_hash() into lookup_open(). The next patch will insert the atomic open call just before the real lookup.

  Signed-off-by: Miklos Szeredi <mszeredi@suse.cz>
  Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
* vfs: add lookup_open()
  Author: Miklos Szeredi   Date: 2012-07-14   Files: 1   Lines: -38/+61

  Split out lookup + maybe create from do_last(). This is the part under i_mutex protection. The function is called lookup_open() and returns a filp even though the open part is not used yet.

  Signed-off-by: Miklos Szeredi <mszeredi@suse.cz>
  Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
* vfs: do_last(): common slow lookup
  Author: Miklos Szeredi   Date: 2012-07-14   Files: 1   Lines: -22/+5

  Make the slow lookup part of O_CREAT and non-O_CREAT opens common. This allows atomic_open to be hooked into the slow lookup part.

  Signed-off-by: Miklos Szeredi <mszeredi@suse.cz>
  Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
* vfs: do_last(): separate O_CREAT specific code
  Author: Miklos Szeredi   Date: 2012-07-14   Files: 1   Lines: -16/+17

  Check O_CREAT on the slow lookup paths where necessary. This allows the rest to be shared with plain open.

  Signed-off-by: Miklos Szeredi <mszeredi@suse.cz>
  Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
* vfs: do_last(): inline lookup_slow()
  Author: Miklos Szeredi   Date: 2012-07-14   Files: 1   Lines: -2/+15

  Copy lookup_slow() into do_last().

  Signed-off-by: Miklos Szeredi <mszeredi@suse.cz>
  Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
* namei.c: let follow_link() do put_link() on failure
  Author: Al Viro   Date: 2012-07-14   Files: 1   Lines: -33/+41

  No need for the kludgy "set cookie to ERR_PTR(...) because we failed before we did the actual ->follow_link() and want to suppress put_link()", and no pointless check in put_link() itself. Callers checked whether follow_link() had failed anyway; might as well break out of their loops if that happened, without bothering to call put_link() first.

  [AV: folded fixes from hch]

  Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
* coda: use list_for_each_entry
  Author: Al Viro   Date: 2012-07-14   Files: 1   Lines: -7/+3

  Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
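Several patches in this series (coda, ocfs2, affs, ext4) make the same mechanical change, so one illustration covers them: list_for_each_entry() hands back the containing structure directly, replacing the open-coded list_for_each() plus list_entry() pair. The structure below is made up for the example; only the traversal idiom is the point.

    #include <linux/list.h>

    /* Hypothetical example structure; only the list-walking idiom matters. */
    struct alias_entry {
        struct list_head link;
        int value;
    };

    static int sum_open_coded(struct list_head *head)
    {
        struct list_head *pos;
        int sum = 0;

        list_for_each(pos, head)        /* open-coded: walk raw nodes ... */
            sum += list_entry(pos, struct alias_entry, link)->value;
        return sum;
    }

    static int sum_with_helper(struct list_head *head)
    {
        struct alias_entry *e;
        int sum = 0;

        list_for_each_entry(e, head, link)  /* ... vs. get each entry directly */
            sum += e->value;
        return sum;
    }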
* vfs: switch i_dentry/d_alias to hlist
  Author: Al Viro   Date: 2012-07-14   Files: 13   Lines: -28/+36

  Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
* ext4: get rid of open-coded d_find_any_alias()
  Author: Al Viro   Date: 2012-07-14   Files: 1   Lines: -8/+1

  Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
* ocfs2: use list_for_each_entry in ocfs2_find_local_alias()
  Author: Al Viro   Date: 2012-07-14   Files: 1   Lines: -11/+5

  Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
* affs: unobfuscate affs_fix_dcache()
  Author: Al Viro   Date: 2012-07-14   Files: 1   Lines: -6/+8

  ... and add a comment on what it's doing.

  Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
* affs: get rid of open-coded list_for_each_entry()
  Author: Al Viro   Date: 2012-07-14   Files: 1   Lines: -6/+1

  Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
* vfs: update documentation on ->i_dentry handling
  Author: Al Viro   Date: 2012-07-14   Files: 1   Lines: -6/+4

  We used to need to clean it in the RCU callback freeing an inode; in 3.2 that requirement went away. Unfortunately, that hadn't been reflected in Documentation/filesystems/porting.

  Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
* adfs: don't bother with ->i_dentry in ->destroy_inode()
  Author: Al Viro   Date: 2012-07-14   Files: 1   Lines: -1/+0

  Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
* cifs: don't bother with ->i_dentry in ->destroy_inode()
  Author: Al Viro   Date: 2012-07-14   Files: 1   Lines: -1/+0

  Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
* qnx6: don't bother with ->i_dentry in inode-freeing callback
  Author: Al Viro   Date: 2012-07-14   Files: 1   Lines: -1/+0

  We'll initialize it in inode_init_always() when we allocate that object again.

  Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
* get rid of magic in proc_namespace.c
  Author: Al Viro   Date: 2012-07-14   Files: 3   Lines: -8/+9

  Don't rely on proc_mounts->m being the first field; container_of() is there for a purpose. No need to bother with ->private, while we are at it - the same container_of() will do nicely.

  Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
* get rid of ->mnt_longterm
  Author: Al Viro   Date: 2012-07-14   Files: 5   Lines: -72/+26

  It's enough to set ->mnt_ns of internal vfsmounts to something distinct from all struct mnt_namespace out there; then we can just use the check for ->mnt_ns != NULL in the fast path of mntput_no_expire().

  Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
* fs/direct-io.c: adjust suspicious bit operation
  Author: Julia Lawall   Date: 2012-07-14   Files: 1   Lines: -1/+1

  READ is 0, so the result of the bit-and operation is 0. Rewrite with == as done elsewhere in the same file.

  This problem was found using Coccinelle (http://coccinelle.lip6.fr/).

  Signed-off-by: Julia Lawall <julia@diku.dk>
  Reviewed-by: Jeff Moyer <jmoyer@redhat.com>
  Cc: Al Viro <viro@zeniv.linux.org.uk>
  Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
  Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
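The bug class is easy to demonstrate outside the kernel: READ is defined as 0, so masking a request flag with it can never yield a true result. A minimal userspace illustration (the constants mirror the kernel's READ/WRITE values; the function is hypothetical):

    #include <stdio.h>

    #define READ  0    /* same values the kernel uses for the rw argument */
    #define WRITE 1

    /* Hypothetical helper just to show the broken and the fixed test. */
    static const char *direction(int rw)
    {
        if (rw & READ)         /* BUG: always false, because READ == 0 */
            return "read (never reached)";
        if (rw == READ)        /* correct test, as used elsewhere in the file */
            return "read";
        return "write";
    }

    int main(void)
    {
        printf("%s\n", direction(READ));    /* prints "read" via the == test */
        printf("%s\n", direction(WRITE));   /* prints "write" */
        return 0;
    }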
* affs: get rid of affs_sync_super
  Author: Artem Bityutskiy   Date: 2012-07-14   Files: 3   Lines: -13/+45

  This patch makes affs stop using the VFS '->write_super()' method along with the 's_dirt' superblock flag, because they are on their way out.

  The whole "superblock write-out" VFS infrastructure is served by the 'sync_supers()' kernel thread, which wakes up every 5 (by default) seconds and writes out all dirty superblocks using the '->write_super()' call-back. But the problem with this thread is that it wastes power by waking up the system every 5 seconds, even if there are no dirty superblocks, or there are no client file-systems which would need this (e.g., btrfs does not use '->write_super()'). So we want to kill it completely and thus, we need to make file-systems stop using the '->write_super()' VFS service, and then remove it together with the kernel thread.

  Signed-off-by: Artem Bityutskiy <artem.bityutskiy@linux.intel.com>
  Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
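For orientation, the shape of this conversion (here and in the follow-up AFFS patches) is roughly: stop marking s_dirt for the periodic sync_supers() thread and instead commit the superblock from ->sync_fs() and other explicit points. The sketch below uses a hypothetical examplefs with the super_operations hooks as they existed at the time; it is not the actual AFFS diff.

    #include <linux/fs.h>

    /* Hypothetical helper that writes the fs-specific superblock data. */
    static void examplefs_commit_super(struct super_block *sb)
    {
        /* stand-in: update and mark the on-disk super/root block dirty */
    }

    /* Before: mark s_dirt and let the periodic sync_supers() thread call this. */
    static void examplefs_write_super(struct super_block *sb)
    {
        examplefs_commit_super(sb);
        sb->s_dirt = 0;
    }

    /* After: write the superblock out when the VFS explicitly asks for a sync. */
    static int examplefs_sync_fs(struct super_block *sb, int wait)
    {
        examplefs_commit_super(sb);
        return 0;
    }

    static const struct super_operations examplefs_sops = {
        .sync_fs = examplefs_sync_fs,   /* .write_super is no longer set */
    };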
* affs: introduce VFS superblock object back-reference
  Author: Artem Bityutskiy   Date: 2012-07-14   Files: 2   Lines: -0/+2

  Add an 'sb' VFS superblock back-reference to the 'struct affs_sb_info' data structure - we will need to find the VFS superblock from a 'struct affs_sb_info' object in the next patch, so this change is just a preparation.

  Signed-off-by: Artem Bityutskiy <artem.bityutskiy@linux.intel.com>
  Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
* affs: stop using lock_super
  Author: Artem Bityutskiy   Date: 2012-07-14   Files: 1   Lines: -2/+3

  The VFS's 'lock_super()' and 'unlock_super()' calls are deprecated and unwanted and just wait for a brave knight who'd kill them. This patch makes AFFS stop using them and use the buffer-head's own lock instead.

  Signed-off-by: Artem Bityutskiy <artem.bityutskiy@linux.intel.com>
  Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
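The replacement idiom is simply the buffer-head lock taken around updates to the buffer that holds the root block. A minimal hedged sketch; the examplefs_sb_info structure and its s_root_bh field are hypothetical, standing in for whatever the filesystem keeps around:

    #include <linux/buffer_head.h>

    /* Hypothetical per-superblock info that caches the root block's buffer. */
    struct examplefs_sb_info {
        struct buffer_head *s_root_bh;
    };

    static void examplefs_update_root(struct examplefs_sb_info *sbi)
    {
        lock_buffer(sbi->s_root_bh);    /* serialize root-block updates ...   */
        /* ... modify the root block through sbi->s_root_bh->b_data here ... */
        mark_buffer_dirty(sbi->s_root_bh);
        unlock_buffer(sbi->s_root_bh);  /* ... instead of lock_super(sb)      */
    }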
* affs: re-structure superblock locking a bit
  Author: Artem Bityutskiy   Date: 2012-07-14   Files: 1   Lines: -5/+2

  AFFS wants to serialize the superblock (the root block in AFFS terms) updates and uses 'lock_super()/unlock_super()' for these purposes. This patch pushes the locking down from the callers into 'affs_commit_super()'.

  Signed-off-by: Artem Bityutskiy <artem.bityutskiy@linux.intel.com>
  Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
* affs: remove useless superblock writeout on remount
  Author: Artem Bityutskiy   Date: 2012-07-14   Files: 1   Lines: -3/+2

  We do not need to write out the superblock from '->remount_fs()' because the VFS has already called '->sync_fs()' by this time and the superblock has already been written out. Thus, remove the 'affs_write_super()' invocation from 'affs_remount()'.

  Signed-off-by: Artem Bityutskiy <artem.bityutskiy@linux.intel.com>
  Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
* affs: remove useless superblock writeout on unmount
  Author: Artem Bityutskiy   Date: 2012-07-14   Files: 1   Lines: -3/+0

  We do not need to write out the superblock from '->put_super()' because the VFS has already called '->sync_fs()' by this time and the superblock has already been written out. Thus, remove the 'affs_commit_super()' invocation from 'affs_put_super()'.

  Signed-off-by: Artem Bityutskiy <artem.bityutskiy@linux.intel.com>
  Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
* affs: stop setting bm_flags
  Author: Artem Bityutskiy   Date: 2012-07-14   Files: 1   Lines: -5/+4

  AFFS stores values '1' and '2' in 'bm_flags', and I fail to see any logic in when it prefers one or the other. AFFS writes '1' only from '->put_super()', while '->sync_fs()' and '->write_super()' store value '2'. So at first glance, it looks like we want to have '1' if we unmount. However, this does not really happen in these cases:

    1. the superblock is written via 'write_super()', then we unmount;
    2. we re-mount R/O, then unmount.

  which are quite typical. I could not find good documentation describing this field, except for one random piece of documentation on the internet which says that -1 means that the root block is valid, which is not consistent with what we have in the Linux AFFS driver.

  Jan Kara commented on this: "I have some vague recollection that on Amiga boolean was usually encoded as: 0 == false, ~0 == -1 == true. But it has been ages..."

  Thus, my conclusion is that value '1' is as good as value '2' and we can just always use '2'. And Jan Kara suggested going further: "generally bm_flags handling looks strange. If they are 0, we mount fs read only and thus cannot change them. If they are != 0, we write 2 there. So IMHO if you just removed bm_flags setting, nothing will really happen."

  So this patch removes the bm_flags setting completely. This makes the "clean" argument of the 'affs_commit_super()' function unneeded, so it is also removed.

  Signed-off-by: Artem Bityutskiy <artem.bityutskiy@linux.intel.com>
  Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
* Merge tag 'md-3.5-fixes' of git://neil.brown.name/md
  Author: Linus Torvalds   Date: 2012-07-14   Files: 1   Lines: -1/+2

  Pull use-after-free RAID1 bugfix from NeilBrown.

  * tag 'md-3.5-fixes' of git://neil.brown.name/md:
    md/raid1: fix use-after-free bug in RAID1 data-check code.
  * md/raid1: fix use-after-free bug in RAID1 data-check code
    Author: NeilBrown   Date: 2012-07-09   Files: 1   Lines: -1/+2

    This bug has been present ever since data-check was introduced in 2.6.16. However it would only fire if a data-check were done on a degraded array, which was only possible if the array has 3 or more devices. This is certainly possible, but is quite uncommon.

    Since hot-replace was added in 3.3 it can happen more often, as the same condition can arise if not all possible replacements are present.

    The problem is that as soon as we submit the last read request, the 'r1_bio' structure could be freed at any time, so we really should stop looking at it. If the last device is being read from, we will stop looking at it. However if the last device is not due to be read from, we will still check the bio pointer in the r1_bio, but the r1_bio might already be free.

    So use the read_targets counter to make sure we stop looking for bios to submit as soon as we have submitted them all.

    This fix is suitable for any -stable kernel since 2.6.16.

    Cc: stable@vger.kernel.org
    Reported-by: Arnold Schulz <arnysch@gmx.net>
    Signed-off-by: NeilBrown <neilb@suse.de>
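The underlying pattern of the fix generalizes: once the final asynchronous sub-request of a parent structure has been handed off, its completion path may free the parent, so the submit loop must stop dereferencing it, driven by a counter snapshotted up front. The sketch below is a generic illustration with made-up types, not the md/raid1 code.

    /*
     * Generic sketch of the idea behind the fix: once the last targeted
     * sub-request has been submitted, the parent request may be freed by
     * its completion handlers, so drive the loop off a local counter and
     * never touch the parent again after that point.  All names made up.
     */
    struct sub_request { int targeted; };

    struct request {
        struct sub_request subs[8];
        int nr_subs;
        int targets;              /* how many sub-requests will be submitted */
    };

    static void submit_async(struct sub_request *s)
    {
        (void)s;                  /* stand-in: hand the I/O to lower layers */
    }

    void submit_all(struct request *req)
    {
        int remaining = req->targets;   /* snapshot before any submission */
        int i;

        /* Short-circuit: req is only dereferenced while remaining > 0. */
        for (i = 0; remaining > 0 && i < req->nr_subs; i++) {
            if (!req->subs[i].targeted)
                continue;
            remaining--;
            submit_async(&req->subs[i]);
            /* Once remaining reaches 0, req may already be gone. */
        }
    }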
* Merge branch 'timers-urgent-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip
  Author: Linus Torvalds   Date: 2012-07-14   Files: 3   Lines: -19/+107

  Pull the leap second fixes from Thomas Gleixner:
   "It's a rather large series, but well discussed, refined and reviewed. It got a massive testing by John, Prarit and tip.

    In theory we could split it into two parts. The first two patches

      f55a6faa3843: hrtimer: Provide clock_was_set_delayed()
      4873fa070ae8: timekeeping: Fix leapsecond triggered load spike issue

    are merely preventing the stuff loops forever issues, which people have observed.

    But there is no point in delaying the other 4 commits which achieve full correctness into 3.6 as they are tagged for stable anyway. And I rather prefer to have the full fixes merged in bulk than a "prevent the observable wreckage and deal with the hidden fallout later" approach."

  * 'timers-urgent-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
    hrtimer: Update hrtimer base offsets each hrtimer_interrupt
    timekeeping: Provide hrtimer update function
    hrtimers: Move lock held region in hrtimer_interrupt()
    timekeeping: Maintain ktime_t based offsets for hrtimers
    timekeeping: Fix leapsecond triggered load spike issue
    hrtimer: Provide clock_was_set_delayed()
  * hrtimer: Update hrtimer base offsets each hrtimer_interrupt
    Author: John Stultz   Date: 2012-07-11   Files: 1   Lines: -14/+14

    The update of the hrtimer base offsets on all cpus cannot be made atomically from the timekeeper.lock held and interrupt disabled region as smp function calls are not allowed there.

    clock_was_set(), which enforces the update on all cpus, is called either from preemptible process context in case of do_settimeofday() or from the softirq context when the offset modification happened in the timer interrupt itself due to a leap second.

    In both cases there is a race window for an hrtimer interrupt between dropping timekeeper lock, enabling interrupts and clock_was_set() issuing the updates. Any interrupt which arrives in that window will see the new time but operate on stale offsets.

    So we need to make sure that an hrtimer interrupt always sees a consistent state of time and offsets.

    ktime_get_update_offsets() allows us to get the current monotonic time and update the per cpu hrtimer base offsets from hrtimer_interrupt() to capture a consistent state of monotonic time and the offsets. The function replaces the existing ktime_get() calls in hrtimer_interrupt().

    The overhead of the new function vs. ktime_get() is minimal as it just adds two store operations.

    This ensures that any changes to realtime or boottime offsets are noticed and stored into the per-cpu hrtimer base structures, prior to any hrtimer expiration, and guarantees that timers are not expired early.

    Signed-off-by: John Stultz <johnstul@us.ibm.com>
    Reviewed-by: Ingo Molnar <mingo@kernel.org>
    Acked-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
    Acked-by: Prarit Bhargava <prarit@redhat.com>
    Cc: stable@vger.kernel.org
    Link: http://lkml.kernel.org/r/1341960205-56738-8-git-send-email-johnstul@us.ibm.com
    Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
  * timekeeping: Provide hrtimer update function
    Author: Thomas Gleixner   Date: 2012-07-11   Files: 2   Lines: -0/+35

    To finally fix the infamous leap second issue and other race windows caused by functions which change the offsets between the various time bases (CLOCK_MONOTONIC, CLOCK_REALTIME and CLOCK_BOOTTIME) we need a function which atomically gets the current monotonic time and updates the offsets of CLOCK_REALTIME and CLOCK_BOOTTIME with minimalistic overhead. The previous patch which provides ktime_t offsets allows us to make this function almost as cheap as ktime_get(), which is going to be replaced in hrtimer_interrupt().

    Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
    Reviewed-by: Ingo Molnar <mingo@kernel.org>
    Acked-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
    Acked-by: Prarit Bhargava <prarit@redhat.com>
    Cc: stable@vger.kernel.org
    Signed-off-by: John Stultz <johnstul@us.ibm.com>
    Link: http://lkml.kernel.org/r/1341960205-56738-7-git-send-email-johnstul@us.ibm.com
    Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
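The "consistent snapshot of monotonic time plus offsets" idea reads naturally as a seqlock retry loop, which the hrtimer interrupt can then use to refresh its per-cpu base offsets. The sketch below uses the generic Linux seqlock API with a made-up, simplified timekeeper; it is not the real ktime_get_update_offsets() implementation.

    #include <linux/seqlock.h>
    #include <linux/ktime.h>

    /* Hypothetical, simplified timekeeper -- the field names are made up. */
    struct tk_sketch {
        seqlock_t lock;
        ktime_t   mono;         /* current CLOCK_MONOTONIC time */
        ktime_t   off_real;     /* offset to CLOCK_REALTIME     */
        ktime_t   off_boot;     /* offset to CLOCK_BOOTTIME     */
    };

    static struct tk_sketch tk = {
        .lock = __SEQLOCK_UNLOCKED(tk.lock),
    };

    /*
     * Return monotonic time and copy out both offsets so that all three
     * values are guaranteed to come from the same timekeeper update.
     */
    static ktime_t sketch_get_update_offsets(ktime_t *offs_real, ktime_t *offs_boot)
    {
        ktime_t now;
        unsigned int seq;

        do {
            seq = read_seqbegin(&tk.lock);
            now = tk.mono;
            *offs_real = tk.off_real;
            *offs_boot = tk.off_boot;
        } while (read_seqretry(&tk.lock, seq));

        return now;
    }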
  * hrtimers: Move lock held region in hrtimer_interrupt()
    Author: Thomas Gleixner   Date: 2012-07-11   Files: 1   Lines: -2/+3

    We need to update the base offsets from this code and we need to do that under base->lock. Move the lock held region around the ktime_get() calls. The ktime_get() calls are going to be replaced with a function which gets the time and the offsets atomically.

    Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
    Reviewed-by: Ingo Molnar <mingo@kernel.org>
    Acked-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
    Acked-by: Prarit Bhargava <prarit@redhat.com>
    Cc: stable@vger.kernel.org
    Signed-off-by: John Stultz <johnstul@us.ibm.com>
    Link: http://lkml.kernel.org/r/1341960205-56738-6-git-send-email-johnstul@us.ibm.com
    Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
  * timekeeping: Maintain ktime_t based offsets for hrtimers
    Author: Thomas Gleixner   Date: 2012-07-11   Files: 1   Lines: -2/+23

    We need to update the hrtimer clock offsets from the hrtimer interrupt context. To avoid conversions from timespec to ktime_t, maintain a ktime_t based representation of those offsets in the timekeeper. This puts the conversion overhead into the code which updates the underlying offsets and provides fast accessible values in the hrtimer interrupt.

    Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
    Signed-off-by: John Stultz <johnstul@us.ibm.com>
    Reviewed-by: Ingo Molnar <mingo@kernel.org>
    Acked-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
    Acked-by: Prarit Bhargava <prarit@redhat.com>
    Cc: stable@vger.kernel.org
    Link: http://lkml.kernel.org/r/1341960205-56738-4-git-send-email-johnstul@us.ibm.com
    Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
  * timekeeping: Fix leapsecond triggered load spike issue
    Author: John Stultz   Date: 2012-07-11   Files: 1   Lines: -0/+4

    The timekeeping code misses an update of the hrtimer subsystem after a leap second happened. Due to that, timers based on CLOCK_REALTIME are either expiring a second early or late, depending on whether a leap second has been inserted or deleted, until an operation is initiated which causes that update. Unless the update happens by some other means, this discrepancy between the timekeeping and the hrtimer data stays forever and timers are expired either early or late.

    The reported immediate workaround - $ date -s "`date`" - is causing a call to clock_was_set() which updates the hrtimer data structures. See: http://www.sheeri.com/content/mysql-and-leap-second-high-cpu-and-fix

    Add the missing clock_was_set() call to update_wall_time() in case of a leap second event. The actual update is deferred to softirq context as the necessary smp function call cannot be invoked from hard interrupt context.

    Signed-off-by: John Stultz <johnstul@us.ibm.com>
    Reported-by: Jan Engelhardt <jengelh@inai.de>
    Reviewed-by: Ingo Molnar <mingo@kernel.org>
    Acked-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
    Acked-by: Prarit Bhargava <prarit@redhat.com>
    Cc: stable@vger.kernel.org
    Link: http://lkml.kernel.org/r/1341960205-56738-3-git-send-email-johnstul@us.ibm.com
    Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
  * hrtimer: Provide clock_was_set_delayed()
    Author: John Stultz   Date: 2012-07-11   Files: 2   Lines: -1/+28

    clock_was_set() cannot be called from hard interrupt context because it calls on_each_cpu(). For fixing the widely reported leap seconds issue it is necessary to call it from hard interrupt context, i.e. the timer tick code, which does the timekeeping updates.

    Provide a new function which denotes it in the hrtimer cpu base structure of the cpu on which it is called and raise the hrtimer softirq. We then execute the clock_was_set() notification from softirq context in run_hrtimer_softirq(). The hrtimer softirq is rarely used, so polling the flag there is not a performance issue.

    [ tglx: Made it depend on CONFIG_HIGH_RES_TIMERS. We really should get rid of all this ifdeffery ASAP ]

    Signed-off-by: John Stultz <johnstul@us.ibm.com>
    Reported-by: Jan Engelhardt <jengelh@inai.de>
    Reviewed-by: Ingo Molnar <mingo@kernel.org>
    Acked-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
    Acked-by: Prarit Bhargava <prarit@redhat.com>
    Cc: stable@vger.kernel.org
    Link: http://lkml.kernel.org/r/1341960205-56738-2-git-send-email-johnstul@us.ibm.com
    Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
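The deferral pattern described here (record the request in hard-irq context, let the softirq perform the expensive cross-CPU notification) can be sketched generically; the names below are hypothetical stand-ins, not the actual hrtimer code.

    /*
     * Generic sketch of deferring work from hard-irq to softirq context
     * via a pending flag.  All names are hypothetical stand-ins.
     */
    #include <stdbool.h>

    struct cpu_base_sketch {
        bool clock_was_set_pending;   /* request recorded by hard irq */
    };

    static struct cpu_base_sketch this_cpu_base;

    static void raise_softirq_sketch(void)
    {
        /* stand-in for raising the (rarely used) hrtimer softirq */
    }

    static void clock_was_set_sketch(void)
    {
        /* stand-in for the on_each_cpu() retrigger; not hard-irq safe */
    }

    /* Called from hard interrupt context, e.g. the timer tick. */
    void clock_was_set_delayed_sketch(void)
    {
        this_cpu_base.clock_was_set_pending = true;
        raise_softirq_sketch();
    }

    /* Softirq handler: a safe context for the cross-CPU call. */
    void run_softirq_sketch(void)
    {
        if (this_cpu_base.clock_was_set_pending) {
            this_cpu_base.clock_was_set_pending = false;
            clock_was_set_sketch();
        }
    }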