path: root/fs
...
| * | | NFS: Use correct variable for page bounds checking (Bryan Schumaker, 2011-04-13; 1 file, -4/+7)
    While decoding a secinfo reply, I store the list of supported sec flavors on
    a page accessible through res->flavors. Before reading each new flavor, I do
    some math to determine if there is enough space left on this page, and I
    break out of my read loop if there isn't. In order to perform this check
    correctly, I need to use the address of res->flavors, rather than the
    address of res.

    When this loop was broken early I lied to the caller and told them that the
    entire list had been decoded. This could lead to problems if the caller
    tries to use any of the garbage data claiming to be a valid sec flavor. I
    fixed this by using res->flavors->num_flavors as a counter, incrementing it
    every time a sec flavor is successfully decoded.

    Signed-off-by: Bryan Schumaker <bjschuma@netapp.com>
    Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>

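A minimal sketch of the decode-loop shape described above. The types and helpers below (secinfo_res, xdr_stream_sketch, decode_u32, decode_one_flavor) are invented for the example and are not the real nfs4xdr.c identifiers; the point is only the base address used for the bounds check and the incremental flavor counter.

    #include <linux/types.h>
    #include <linux/errno.h>

    struct secinfo_flavor {
            u32 flavor;
            /* ... flavor-specific data ... */
    };

    struct secinfo_res {
            u32 num_flavors;
            struct secinfo_flavor *flavors;   /* backed by a single page */
    };

    struct xdr_stream_sketch;                 /* placeholder XDR stream type */
    u32 decode_u32(struct xdr_stream_sketch *xdr);
    int decode_one_flavor(struct xdr_stream_sketch *xdr, struct secinfo_flavor *f);

    static int decode_flavor_list(struct xdr_stream_sketch *xdr,
                                  struct secinfo_res *res, size_t page_size)
    {
            u32 i, count = decode_u32(xdr);

            res->num_flavors = 0;
            for (i = 0; i < count; i++) {
                    /* Bounds check measured from res->flavors (the page),
                     * not from res itself, which was the original bug. */
                    char *page_start = (char *)res->flavors;
                    char *next_end = (char *)&res->flavors[res->num_flavors + 1];

                    if ((size_t)(next_end - page_start) > page_size)
                            break;            /* page full: stop decoding */

                    if (decode_one_flavor(xdr, &res->flavors[res->num_flavors]) < 0)
                            return -EIO;
                    /* Count only flavors that were actually decoded, so the
                     * caller never sees garbage entries. */
                    res->num_flavors++;
            }
            return 0;
    }
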
| * | | NFS: don't negotiate when user specifies sec flavor (Bryan Schumaker, 2011-04-13; 2 files, -1/+3)
    We were always attempting sec flavor negotiation, even if the user told us
    a specific sec flavor to use. If that sec flavor fails, we should return an
    error rather than continuing with sec flavor negotiation.

    Signed-off-by: Bryan Schumaker <bjschuma@netapp.com>
    Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>

| * | | NFS: Attempt mount with default sec flavor first (Bryan Schumaker, 2011-04-13; 1 file, -8/+16)
    nfs4_lookup_root() is already configured to use either RPC_AUTH_UNIX or a
    user specified flavor (through -o sec=<whatever>). We should use this
    flavor first, and only attempt negotiation if it fails with -EPERM.

    Signed-off-by: Bryan Schumaker <bjschuma@netapp.com>
    Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>

| * | | NFS: flav_array honors NFS_MAX_SECFLAVORS (Bryan Schumaker, 2011-04-13; 1 file, -1/+1)
    NFS_MAX_SECFLAVORS should already take into account RPC_AUTH_UNIX and
    RPC_AUTH_NULL, so we don't need to set aside extra slots for them.

    Signed-off-by: Bryan Schumaker <bjschuma@netapp.com>
    Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>

| * | | NFS: Fix infinite loop in gss_create_upcall() (Bryan Schumaker, 2011-04-13; 1 file, -2/+3)
    There can be an infinite loop if gss_create_upcall() is called without the
    userspace program running. To prevent this, we return -EACCES if we notice
    that pipe_version hasn't changed (indicating that the pipe has not been
    opened).

    Signed-off-by: Bryan Schumaker <bjschuma@netapp.com>
    Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>

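A rough illustration of that guard, with placeholder helpers (get_pipe_version() and wait_for_gssd() are made up, not the real sunrpc functions): remember the pipe version before waiting, and give up with -EACCES if it is unchanged afterwards, i.e. rpc.gssd never opened the pipe.

    #include <linux/errno.h>

    int get_pipe_version(void);     /* placeholder */
    void wait_for_gssd(void);       /* placeholder */

    static int gss_upcall_wait_sketch(void)
    {
            int first_version = get_pipe_version();

            wait_for_gssd();
            if (get_pipe_version() == first_version)
                    return -EACCES; /* pipe never opened: don't loop forever */
            return 0;               /* gssd showed up, caller may retry */
    }
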
| * | | Don't mark_inode_dirty_sync() while holding lock (Weston Andros Adamson, 2011-04-13; 1 file, -1/+7)
    mark_inode_dirty_sync() grabs the same inode lock! Race conditions between
    holding the lock in pnfs_set_layoutcommit() and in mark_inode_dirty_sync()
    can result in a second call to pnfs_layoutcommit_inode(), but this will be
    a noop as NFS_INO_LAYOUTCOMMIT won't be set in the second call.

    Signed-off-by: Weston Andros Adamson <dros@netapp.com>
    Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>

| * | | NFS: Get rid of pointless test in nfs_commit_done (Trond Myklebust, 2011-04-13; 1 file, -2/+1)
    Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>

| * | | NFS: Remove unused argument from nfs_find_best_sec() (Bryan Schumaker, 2011-04-13; 1 file, -2/+2)
    The inode was used in an earlier version of the code, but it isn't used
    anymore.

    Signed-off-by: Bryan Schumaker <bjschuma@netapp.com>
    Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>

| * | | NFS: Eliminate duplicate call to nfs_mark_request_dirty (Trond Myklebust, 2011-04-13; 1 file, -1/+0)
    We only need to call nfs_mark_request_dirty() once in nfs_writepage_setup().

    Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>

| * | | NFS: Remove dead code from nfs_fs_mount() (Jesper Juhl, 2011-04-13; 1 file, -2/+1)
    In fs/nfs/super.c::nfs_fs_mount() we test for a NULL 'data':

        ...
        if (data == NULL || mntfh == NULL)
                goto out_free_fh;
        ...

    and then further down in the function we test 'data' again:

        ...
        nfs_fscache_get_super_cookie(
                s, data ? data->fscache_uniq : NULL, NULL);
        ...

    This second check is just dead code since there is no way 'data' could
    possibly be NULL here. We also rely on a non-NULL 'data' in more than one
    location between these two tests, further proving the point that the
    second test is bogus.

    This patch removes the dead code.

    Signed-off-by: Jesper Juhl <jj@chaosbits.net>
    Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>

* | | | vfs: avoid large kmalloc()s for the fdtable (Andrew Morton, 2011-04-28; 1 file, -7/+11)
    Azurit reports large increases in system time after 2.6.36 when running
    Apache. It was bisected down to a892e2d7dcdfa6c76e6 ("vfs: use kmalloc()
    to allocate fdmem if possible").

    That patch caused the vfs to use kmalloc() for very large allocations and
    this is causing excessive work (and presumably excessive reclaim) within
    the page allocator. Fix it by falling back to vmalloc() earlier - when the
    allocation attempt would have been considered "costly" by reclaim.

    Reported-by: azurIt <azurit@pobox.sk>
    Tested-by: azurIt <azurit@pobox.sk>
    Acked-by: Changli Gao <xiaosuo@gmail.com>
    Cc: Americo Wang <xiyou.wangcong@gmail.com>
    Cc: Jiri Slaby <jslaby@suse.cz>
    Acked-by: Eric Dumazet <eric.dumazet@gmail.com>
    Cc: Mel Gorman <mel@csn.ul.ie>
    Cc: <stable@kernel.org>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

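The resulting fdtable allocation helper in fs/file.c looks roughly like the sketch below (from memory, so treat the exact body as approximate): try kmalloc() only while the request is at or below the page allocator's "costly" order, and otherwise, or on failure, fall back to vmalloc().

    #include <linux/slab.h>
    #include <linux/vmalloc.h>
    #include <linux/mm.h>

    static void *alloc_fdmem(unsigned int size)
    {
            /*
             * Very large physically contiguous allocations stress page
             * reclaim, so only try kmalloc() while the request is still
             * "cheap" (at or below PAGE_ALLOC_COSTLY_ORDER pages).
             */
            if (size <= (PAGE_SIZE << PAGE_ALLOC_COSTLY_ORDER)) {
                    void *data = kmalloc(size, GFP_KERNEL | __GFP_NOWARN);
                    if (data != NULL)
                            return data;
            }
            return vmalloc(size);
    }
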
* | | | Revert wrong fixes for common misspellings (Lucas De Marchi, 2011-04-27; 2 files, -2/+2)
    These changes were incorrectly fixed by codespell. They have now been
    manually corrected.

    Signed-off-by: Lucas De Marchi <lucas.demarchi@profusion.mobi>

* | | | Merge git://git.kernel.org/pub/scm/linux/kernel/git/mason/btrfs-unstable (Linus Torvalds, 2011-04-26; 6 files, -15/+32)
    * git://git.kernel.org/pub/scm/linux/kernel/git/mason/btrfs-unstable:
      Btrfs: cleanup error handling in inode.c
      Btrfs: put the right bio if we have an error
      Btrfs: free bitmaps properly when evicting the cache
      Btrfs: Free free_space item properly in btrfs_trim_block_group()
      btrfs: add missing spin_unlock to a rare exit path
      Btrfs: check return value of kmalloc()
      btrfs: fix wrong allocating flag when reading page
      Btrfs: fix missing mutex_unlock in btrfs_del_dir_entries_in_log()

| * | | Btrfs: cleanup error handling in inode.c (Tsutomu Itoh, 2011-04-26; 1 file, -6/+9)
    The error handling in several places is changed, for example setting the
    error number only when an error actually occurs.

    Signed-off-by: Tsutomu Itoh <t-itoh@jp.fujitsu.com>
    Signed-off-by: Chris Mason <chris.mason@oracle.com>

| * | | Btrfs: put the right bio if we have an error (Josef Bacik, 2011-04-26; 1 file, -1/+1)
    In btrfs_submit_direct_hook if the first btrfs_map_block fails we need to
    put the orig_bio, not bio.

    Signed-off-by: Josef Bacik <josef@redhat.com>
    Signed-off-by: Chris Mason <chris.mason@oracle.com>

| * | | Btrfs: free bitmaps properly when evicting the cache (Josef Bacik, 2011-04-26; 1 file, -4/+7)
    If our space cache is wrong, we do the right thing and free up everything
    that we loaded, however we don't reset the total_bitmaps counter or the
    thresholds or anything. So in btrfs_remove_free_space_cache make sure to
    call free_bitmap() if it's a bitmap, this will keep us from panicking when
    we check to make sure we don't have too many bitmaps. Thanks,

    Signed-off-by: Josef Bacik <josef@redhat.com>
    Signed-off-by: Chris Mason <chris.mason@oracle.com>

| * | | Btrfs: Free free_space item properly in btrfs_trim_block_group() (Li Zefan, 2011-04-26; 1 file, -1/+1)
    Since commit dc89e9824464e91fa0b06267864ceabe3186fd8b, we've changed to use
    a specific slab for allocation of free_space items.

    Signed-off-by: Li Zefan <lizf@cn.fujitsu.com>
    Signed-off-by: Chris Mason <chris.mason@oracle.com>

| * | | btrfs: add missing spin_unlock to a rare exit path (David Sterba, 2011-04-26; 1 file, -0/+1)
    Signed-off-by: David Sterba <dsterba@suse.cz>
    Signed-off-by: Chris Mason <chris.mason@oracle.com>

| * | | Btrfs: check return value of kmalloc() (Tsutomu Itoh, 2011-04-26; 2 files, -0/+7)
    Checks on the return value of kmalloc() are added in several places.

    Signed-off-by: Tsutomu Itoh <t-itoh@jp.fujitsu.com>
    Signed-off-by: Chris Mason <chris.mason@oracle.com>

| * | | btrfs: fix wrong allocating flag when reading page (Itaru Kitayama, 2011-04-26; 1 file, -1/+1)
    The space cache uses extent_readpages() to read free space information, so
    we cannot use the GFP_KERNEL flag to allocate memory, or it may lead to
    deadlock.

    Signed-off-by: Itaru Kitayama <kitayama@cl.bb4u.ne.jp>
    Signed-off-by: Miao Xie <miaox@cn.fujitsu.com>
    Signed-off-by: Chris Mason <chris.mason@oracle.com>

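For illustration only (a made-up helper, not the btrfs diff itself): the usual pattern on such paths is to drop to GFP_NOFS, so that direct reclaim cannot call back into the filesystem while it holds its own locks.

    #include <linux/gfp.h>
    #include <linux/mm.h>

    /* Hypothetical helper showing just the flag choice. */
    static struct page *space_cache_alloc_page_sketch(void)
    {
            /*
             * GFP_KERNEL would let direct reclaim re-enter the filesystem
             * (to write out dirty data) while this read path may already
             * hold fs locks; GFP_NOFS clears __GFP_FS and avoids that
             * recursion, at the cost of less aggressive reclaim.
             */
            return alloc_page(GFP_NOFS);
    }
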
| * | | Btrfs: fix missing mutex_unlock in btrfs_del_dir_entries_in_log() (Tsutomu Itoh, 2011-04-26; 1 file, -2/+5)
    It is necessary to unlock the mutex before returning an error when
    btrfs_alloc_path() fails.

    Signed-off-by: Tsutomu Itoh <t-itoh@jp.fujitsu.com>
    Signed-off-by: Chris Mason <chris.mason@oracle.com>

* | | | Merge branch 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/mason/btrfs-unstable (Linus Torvalds, 2011-04-26; 1 file, -0/+10)
    * 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/mason/btrfs-unstable:
      Btrfs: do some plugging in the submit_bio threads

| * | | | Btrfs: do some plugging in the submit_bio threads (Chris Mason, 2011-04-20; 1 file, -0/+10)
    The Btrfs submit bio threads have a small number of threads responsible
    for pushing down bios we've collected for a large number of devices. Since
    we do all the bios for a single device at once, we want to make sure we
    unplug and send down the bios for each device as we're done processing
    them.

    The new plugging API removed the btrfs code to unplug while processing
    bios, this adds it back with the new API.

    Signed-off-by: Chris Mason <chris.mason@oracle.com>

* | | | | Merge git://git.kernel.org/pub/scm/linux/kernel/git/sfrench/cifs-2.6 (Linus Torvalds, 2011-04-26; 1 file, -2/+3)
    * git://git.kernel.org/pub/scm/linux/kernel/git/sfrench/cifs-2.6:
      CIFS: Fix memory over bound bug in cifs_parse_mount_options

| * | | | | CIFS: Fix memory over bound bug in cifs_parse_mount_options (Pavel Shilovsky, 2011-04-21; 1 file, -2/+3)
    During password processing we can read past the end of the options array
    if the character right after the array is a delimiter. The patch adds a
    check for whether we have reached the end.

    Signed-off-by: Pavel Shilovsky <piastry@etersoft.ru>
    Reviewed-by: Jeff Layton <jlayton@redhat.com>
    Signed-off-by: Steve French <sfrench@us.ibm.com>

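A minimal sketch of the kind of scan involved (the function and its signature are invented for the example, not the cifs.ko code): the password value may escape the separator by doubling it, so the parser has to peek one character ahead, and that peek is what needs an explicit end-of-buffer check.

    #include <stddef.h>

    /* 'end' points one past the last valid byte of the options buffer. */
    static size_t password_length_sketch(const char *value, const char *end,
                                         char delim)
    {
            const char *p = value;

            while (p < end && *p != '\0') {
                    if (*p != delim) {
                            p++;
                            continue;
                    }
                    if (p + 1 >= end)
                            break;   /* delimiter is the last byte: peeking
                                        at p[1] would run off the buffer */
                    if (p[1] == delim)
                            p += 2;  /* doubled delimiter: literal character */
                    else
                            break;   /* single delimiter ends the password */
            }
            return p - value;
    }
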
* | | | | Merge branch 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/ecryptfs/ecryptfs-2.6 (Linus Torvalds, 2011-04-26; 7 files, -79/+128)
    * 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/ecryptfs/ecryptfs-2.6:
      eCryptfs: Flush dirty pages in setattr
      eCryptfs: Handle failed metadata read in lookup
      eCryptfs: Add reference counting to lower files
      eCryptfs: dput dentries returned from dget_parent
      eCryptfs: Remove extra d_delete in ecryptfs_rmdir

| * | | | | eCryptfs: Flush dirty pages in setattr (Tyler Hicks, 2011-04-26; 1 file, -0/+6)
    After 57db4e8d73ef2b5e94a3f412108dff2576670a8a changed eCryptfs to
    write-back caching, eCryptfs page writeback updates the lower inode times
    due to the use of vfs_write() on the lower file.

    To preserve inode metadata changes, such as 'cp -p' does with utimensat(),
    we need to flush all dirty pages early in ecryptfs_setattr() so that the
    user-updated lower inode metadata isn't clobbered later in writeback.

    https://bugzilla.kernel.org/show_bug.cgi?id=33372

    Reported-by: Rocko <rockorequin@hotmail.com>
    Signed-off-by: Tyler Hicks <tyhicks@linux.vnet.ibm.com>

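The shape of that change is roughly the sketch below (simplified, with a placeholder function name): write back and wait on the eCryptfs inode's dirty pages before the attribute change proceeds.

    #include <linux/fs.h>
    #include <linux/dcache.h>

    static int ecryptfs_setattr_sketch(struct dentry *dentry, struct iattr *ia)
    {
            struct inode *inode = dentry->d_inode;
            int rc;

            /* Flush our dirty pages to the lower file now, so that later
             * writeback cannot clobber lower inode metadata (e.g. the
             * times that 'cp -p' is about to set). */
            rc = filemap_write_and_wait(inode->i_mapping);
            if (rc)
                    return rc;

            /* ... the existing truncate / notify_change handling follows ... */
            return 0;
    }
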
| * | | | | eCryptfs: Handle failed metadata read in lookup (Tyler Hicks, 2011-04-26; 4 files, -16/+28)
    When failing to read the lower file's crypto metadata during a lookup,
    eCryptfs must continue on without throwing an error. For example, there
    may be a plaintext file in the lower mount point that the user wants to
    delete through the eCryptfs mount.

    If an error is encountered while reading the metadata in lookup(), the
    eCryptfs inode's size could be incorrect. We must be sure to reread the
    plaintext inode size from the metadata when performing an open() or
    setattr(). The metadata is already being read in those paths, so this adds
    minimal performance overhead.

    This patch introduces a flag which will track whether or not the plaintext
    inode size has been read so that an incorrect i_size can be fixed in the
    open() or setattr() paths.

    https://bugs.launchpad.net/bugs/509180

    Cc: <stable@kernel.org>
    Signed-off-by: Tyler Hicks <tyhicks@linux.vnet.ibm.com>

| * | | | | eCryptfs: Add reference counting to lower files (Tyler Hicks, 2011-04-26; 6 files, -59/+92)
    For any given lower inode, eCryptfs keeps only one lower file open and
    multiplexes all eCryptfs file operations through that lower file. The
    lower file was considered "persistent" and stayed open from the first
    lookup through the lifetime of the inode.

    This patch keeps the notion of a single, per-inode lower file, but adds
    reference counting around the lower file so that it is closed when not
    currently in use. If the reference count is at 0 when an operation (such
    as open, create, etc.) needs to use the lower file, a new lower file is
    opened. Since the file is no longer persistent, all references to the term
    persistent file are changed to lower file.

    Locking is added around the sections of code that open the lower file and
    assign the pointer in the inode info, as well as the code that fputs the
    lower file when all eCryptfs users are done with it.

    This patch is needed to fix issues, when mounted on top of the NFSv3
    client, where the lower file is left silly renamed until the eCryptfs
    inode is destroyed.

    Signed-off-by: Tyler Hicks <tyhicks@linux.vnet.ibm.com>

| * | | | | eCryptfs: dput dentries returned from dget_parent (Tyler Hicks, 2011-04-26; 1 file, -2/+2)
    Call dput on the dentries previously returned by dget_parent() in
    ecryptfs_rename(). This is needed for supporting eCryptfs mounts on top of
    the NFSv3 client.

    Signed-off-by: Tyler Hicks <tyhicks@linux.vnet.ibm.com>

| * | | | | eCryptfs: Remove extra d_delete in ecryptfs_rmdir (Tyler Hicks, 2011-04-26; 1 file, -2/+0)
    vfs_rmdir() already calls d_delete() on the lower dentry. That was being
    duplicated in ecryptfs_rmdir() and caused a NULL pointer dereference when
    NFSv3 was the lower filesystem.

    Signed-off-by: Tyler Hicks <tyhicks@linux.vnet.ibm.com>

* | | | | | add hlist_bl_lock/unlock helpers (Christoph Hellwig, 2011-04-26; 2 files, -20/+8)
    Now that the whole dcache_hash_bucket crap is gone, go all the way and
    also remove the weird locking layering violations for locking the hash
    buckets. Add hlist_bl_lock/unlock helpers to move the locking into the
    list abstraction instead of requiring each caller to open code it. After
    all, allowing for the bit locks is the whole point of these helpers over
    the plain hlist variant.

    Signed-off-by: Christoph Hellwig <hch@lst.de>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

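The helpers themselves are essentially thin wrappers that treat bit 0 of the list head pointer as a bit spinlock, along these lines (from memory, so treat the exact bodies as a sketch):

    #include <linux/list_bl.h>
    #include <linux/bit_spinlock.h>

    static inline void hlist_bl_lock(struct hlist_bl_head *b)
    {
            /* bit 0 of the head pointer doubles as the bucket lock */
            bit_spin_lock(0, (unsigned long *)b);
    }

    static inline void hlist_bl_unlock(struct hlist_bl_head *b)
    {
            __bit_spin_unlock(0, (unsigned long *)b);
    }
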
* | | | | Merge branch 'dcache-cleanup' (Linus Torvalds, 2011-04-24; 1 file, -26/+16)
    * dcache-cleanup:
      vfs: get rid of insane dentry hashing rules

| * | | | | vfs: get rid of insane dentry hashing rules (Linus Torvalds, 2011-04-24; 1 file, -26/+16)
    The dentry hashing rules have been really quite complicated for a long
    while, in odd ways. That made functions like __d_drop() very fragile and
    non-obvious.

    In particular, whether a dentry was hashed or not was indicated with an
    explicit DCACHE_UNHASHED bit. That's despite the fact that the hash
    abstraction that the dentries use actually have a 'is this entry hashed or
    not' model (which is a simple test of the 'pprev' pointer).

    The reason that was done is because we used the normal 'is this entry
    unhashed' model to mark whether the dentry had _ever_ been hashed in the
    dentry hash tables, and that logic goes back many years (commit
    b3423415fbc2: "dcache: avoid RCU for never-hashed dentries").

    That, in turn, meant that __d_drop had totally different unhashing logic
    for the dentry hash table case and for the anonymous dcache case, because
    in order to use the "is this dentry hashed" logic as a flag for whether it
    had ever been on the RCU hash table, we had to unhash such a dentry
    differently so that we'd never think that it wasn't 'unhashed' and
    wouldn't be free'd correctly.

    That's just insane. It made the logic really hard to follow, when there
    were two different kinds of "unhashed" states, and one of them (the one
    that used "list_bl_unhashed()") really had nothing at all to do with being
    unhashed per se, but with a very subtle lifetime rule instead.

    So turn all of it around, and make it logical.

    Instead of having a DCACHE_UNHASHED bit in d_flags to indicate whether the
    dentry is on the hash chains or not, use the hash chain unhashed logic for
    that. Suddenly "d_unhashed()" just uses "list_bl_unhashed()", and
    everything makes sense.

    And for the lifetime rule, just use an explicit DCACHE_RCUACCESS bit. If
    we ever insert the dentry into the dentry hash table so that it is visible
    to RCU lookup, we mark it DCACHE_RCUACCESS to show that it now needs the
    RCU lifetime rules. Now suddenly that test at dentry free time makes sense
    too.

    And because unhashing now is sane and doesn't depend on where the dentry
    got unhashed from (because the dentry hash chain details doesn't have some
    subtle side effects), we can re-unify the __d_drop() logic and use common
    code for the unhashing.

    Also fix one more open-coded hash chain bit_spin_lock() that I missed in
    the previous chain locking cleanup commit.

    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

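A condensed sketch of the model described above (simplified, not the actual fs/dcache.c code; d_flags locking and bucket lookup are elided): "hashed" is answered by the pprev pointer of the dentry's hash node, unhashing is the same for every kind of dentry, and the RCU lifetime question becomes a separate explicit flag set when the dentry first becomes visible to RCU lookup.

    #include <linux/dcache.h>
    #include <linux/rculist_bl.h>

    /* "Is this dentry on a hash chain right now?" is just the pprev test. */
    static inline bool d_unhashed_sketch(const struct dentry *dentry)
    {
            return hlist_bl_unhashed(&dentry->d_hash);
    }

    /* Unhashing no longer cares which list the dentry was on. */
    static void d_drop_sketch(struct dentry *dentry, struct hlist_bl_head *b)
    {
            if (!d_unhashed_sketch(dentry)) {
                    hlist_bl_lock(b);
                    __hlist_bl_del(&dentry->d_hash);
                    dentry->d_hash.pprev = NULL;
                    hlist_bl_unlock(b);
            }
    }

    /* The RCU lifetime rule is an explicit flag, set once the dentry is
     * inserted where RCU lookup can see it and checked at free time. */
    static void d_rehash_sketch(struct dentry *dentry, struct hlist_bl_head *b)
    {
            dentry->d_flags |= DCACHE_RCUACCESS;
            hlist_bl_lock(b);
            hlist_bl_add_head_rcu(&dentry->d_hash, b);
            hlist_bl_unlock(b);
    }
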
* | | | | | Merge branch 'for-linus' of git://git.infradead.org/ubifs-2.6 (Linus Torvalds, 2011-04-24; 2 files, -8/+47)
    * 'for-linus' of git://git.infradead.org/ubifs-2.6:
      UBIFS: fix master node recovery
      UBIFS: fix false assertion warning in case of I/O failures
      UBIFS: fix false space checking failure

| * | | | | UBIFS: fix master node recovery (Artem Bityutskiy, 2011-04-21; 1 file, -0/+26)
    This patch fixes the following symptoms:
    1. Unmount UBIFS cleanly.
    2. Start mounting UBIFS R/W and have a power cut immediately.
    3. Start mounting UBIFS R/O, this succeeds.
    4. Try to re-mount UBIFS R/W - this fails immediately or later on, because
       UBIFS will write the master node to the flash area which has been
       written before.

    The analysis of the problem:
    1. UBIFS is unmounted cleanly, both copies of the master node are clean.
    2. UBIFS is being mounted R/W, starts changing master node copy 1, and a
       power cut happens. The copy N1 becomes corrupted.
    3. UBIFS is being mounted R/O. It notices the copy N1 is corrupted and
       reads copy N2. Copy N2 is clean.
    4. Because of R/O mode, UBIFS cannot recover copy 1.
    5. The mount code (ubifs_mount()) sees that the master node is clean, so
       it decides that no recovery is needed.
    6. We are re-mounting R/W. UBIFS believes no recovery is needed and starts
       updating the master node, but copy N1 is still corrupted and was not
       recovered!

    Fix this problem by marking the master node as dirty every time we recover
    it and we are in R/O mode. This forces further recovery, and UBIFS cleans
    up the corruptions and recovers the copy N1 when re-mounting R/W later.

    Signed-off-by: Artem Bityutskiy <Artem.Bityutskiy@nokia.com>
    Cc: stable@kernel.org

| * | | | | UBIFS: fix false assertion warning in case of I/O failures (Artem Bityutskiy, 2011-04-21; 1 file, -4/+6)
    When UBIFS switches to R/O mode because it detects I/O failures, then when
    we unmount, we still may have allocated budget, and the assertions which
    verify that we have no budget will fire. But it is expected to have the
    budget in case of I/O failures, so the assertion warnings will be false.
    Suppress them for the I/O failure case.

    Signed-off-by: Artem Bityutskiy <Artem.Bityutskiy@nokia.com>

| * | | | | UBIFS: fix false space checking failure (Artem Bityutskiy, 2011-04-20; 1 file, -4/+15)
    This patch fixes UBIFS mount failure when the debugging support is
    enabled, we are recovering from a power cut, we were first mounted R/O and
    we are re-mounting R/W.

    In this case we should not assume that the amount of free space before we
    have re-mounted R/W and after are equivalent, because when we have mounted
    R/O the file-system is in a non-committed state, so the amount of free
    space is slightly smaller, due to the fact that we cannot predict the
    amount of free space precisely before we commit.

    This patch fixes the issue by skipping the debugging check in case of
    recovery. This issue was reported by Caizhiyong <caizhiyong@huawei.com>
    here: http://thread.gmane.org/gmane.linux.drivers.mtd/34350/focus=34387

    Signed-off-by: Artem Bityutskiy <Artem.Bityutskiy@nokia.com>
    Reported-by: Caizhiyong <caizhiyong@huawei.com>
    Cc: stable@kernel.org [2.6.30+]

* | | | | vfs: get rid of 'struct dcache_hash_bucket' abstraction (Linus Torvalds, 2011-04-24; 1 file, -24/+21)
    It's a useless abstraction for 'hlist_bl_head', and it doesn't actually
    help anything - quite the reverse. All the users end up having to know
    about the hlist_bl_head details anyway, using 'struct hlist_bl_node *'
    etc. So it just makes the code look confusing.

    And the cost of it is extra '&b->head' syntactic noise, but more
    importantly it spuriously makes the hash table dentry list look different
    from the per-superblock DCACHE_DISCONNECTED dentry list. As a result, the
    code ended up using ad-hoc locking for one case and special helper
    functions for what is really another totally identical case in the very
    same function.

    Make it all look and work the same.

    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

* | | | | Merge branch 'for-linus' of git://oss.sgi.com/xfs/xfs (Linus Torvalds, 2011-04-21; 1 file, -1/+3)
    * 'for-linus' of git://oss.sgi.com/xfs/xfs:
      xfs: fix duplicate message output

| * | | | | xfs: fix duplicate message output (Dave Chinner, 2011-04-20; 1 file, -1/+3)
    Commit 957935dc ("xfs: fix xfs_debug warnings") broke the logic in
    __xfs_printk(). Instead of only printing one of two possible output
    strings based on whether the fs has a name or not, it outputs both. Fix it
    to only output one message again.

    Signed-off-by: Dave Chinner <dchinner@redhat.com>
    Reviewed-by: Christoph Hellwig <hch@lst.de>
    Signed-off-by: Alex Elder <aelder@sgi.com>

* | | | | vfs: Pass setxattr(2) flags properly (Jan Kara, 2011-04-21; 1 file, -1/+1)
    For some reason generic_setxattr() did not pass flags (XATTR_CREATE,
    XATTR_REPLACE) to the filesystem specific helper. This caused the
    setxattr(2) syscall to just ignore these flags. Fix the bug by passing
    flags correctly.

    Signed-off-by: Jan Kara <jack@suse.cz>
    Acked-by: Christoph Hellwig <hch@lst.de>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

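After the fix, generic_setxattr() in fs/xattr.c forwards the caller's flags down to the handler, roughly as below (reconstructed from memory, so treat the details as approximate; xattr_resolve_name() is the file-local prefix-matching helper).

    #include <linux/fs.h>
    #include <linux/dcache.h>
    #include <linux/xattr.h>
    #include <linux/errno.h>

    int generic_setxattr(struct dentry *dentry, const char *name,
                         const void *value, size_t size, int flags)
    {
            const struct xattr_handler *handler;

            if (size == 0)
                    value = "";  /* empty EA, not a removal */
            handler = xattr_resolve_name(dentry->d_sb->s_xattr, &name);
            if (!handler)
                    return -EOPNOTSUPP;
            /* the fix: pass XATTR_CREATE/XATTR_REPLACE instead of 0 */
            return handler->set(dentry, name, value, size, flags,
                                handler->flags);
    }
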
* | | | | Merge branch 'for-2.6.39' of git://linux-nfs.org/~bfields/linux (Linus Torvalds, 2011-04-21; 2 files, -2/+10)
    * 'for-2.6.39' of git://linux-nfs.org/~bfields/linux:
      Open with O_CREAT flag set fails to open existing files on non writable directories
      nfsd4: Fix filp leak
      nfsd4: fix struct file leak on delegation

| * | | | | Open with O_CREAT flag set fails to open existing files on non writable directories (Sachin Prabhu, 2011-04-20; 1 file, -1/+8)
    An open on a NFS4 share using the O_CREAT flag on an existing file for
    which we have permissions to open but contained in a directory with no
    write permissions will fail with EACCES.

    A tcpdump shows that the client had set the open mode to UNCHECKED which
    indicates that the file should be created if it doesn't exist and
    encountering an existing file is not an error. Since in this case the file
    exists and can be opened by the user, the NFS server is wrong in
    attempting to check create permissions on the parent directory.

    The patch adds a conditional statement to check for create permissions
    only if the file doesn't exist.

    Signed-off-by: Sachin S. Prabhu <sprabhu@redhat.com>
    Signed-off-by: J. Bruce Fields <bfields@redhat.com>

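The conditional boils down to something like the following sketch (a made-up helper; check_create_permission() and check_open_permission() are placeholders, not nfsd functions):

    #include <linux/dcache.h>

    int check_create_permission(struct dentry *parent);            /* placeholder */
    int check_open_permission(struct dentry *child, int access);   /* placeholder */

    static int may_create_or_open_sketch(struct dentry *parent,
                                         struct dentry *child, int access)
    {
            if (child->d_inode == NULL)
                    /* the file really will be created, so the caller needs
                     * write/create permission on the parent directory */
                    return check_create_permission(parent);

            /* the file exists: an UNCHECKED create simply opens it, so only
             * the permissions on the file itself matter */
            return check_open_permission(child, access);
    }
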
| * | | | | nfsd4: Fix filp leak (OGAWA Hirofumi, 2011-04-19; 1 file, -1/+1)
    Commit 23fcf2ec93fb8573a653408316af599939ff9a8e ("nfsd4: fix oops on lock
    failure") breaks the free path for stp->st_file. If stp was inserted into
    sop->so_stateids, we have to free the stp->st_file refcount, because the
    stp->st_file refcount itself is taken whether or not any refcounts are
    taken on the stp->st_file->fi_fds[].

    Signed-off-by: OGAWA Hirofumi <hirofumi@mail.parknet.co.jp>
    Cc: stable@kernel.org
    Signed-off-by: J. Bruce Fields <bfields@redhat.com>

| * | | | | nfsd4: fix struct file leak on delegation (J. Bruce Fields, 2011-04-18; 1 file, -0/+1)
    Introduced by acfdf5c383b38f7f4dddae41b97c97f1ae058f49.

    Cc: stable@kernel.org
    Reported-by: Gerhard Heift <ml-nfs-linux-20110412-ef47@gheift.de>
    Signed-off-by: J. Bruce Fields <bfields@redhat.com>

* | | | | | Merge git://git.kernel.org/pub/scm/linux/kernel/git/steve/gfs2-2.6-fixes (Linus Torvalds, 2011-04-19; 9 files, -34/+111)
    * git://git.kernel.org/pub/scm/linux/kernel/git/steve/gfs2-2.6-fixes:
      GFS2: filesystem hang caused by incorrect lock order
      GFS2: Don't try to deallocate unlinked inodes when mounted ro
      GFS2: directly write blocks past i_size
      GFS2: write_end error path fails to unlock transaction lock

| * | | | | GFS2: filesystem hang caused by incorrect lock order (Bob Peterson, 2011-04-18; 6 files, -21/+55)
    This patch fixes a deadlock in GFS2 where two processes are trying to
    reclaim an unlinked dinode: One holds the inode glock and calls
    gfs2_lookup_by_inum trying to look up the inode, which it can't, due to
    I_FREEING. The other has set I_FREEING from vfs and is at the beginning of
    gfs2_delete_inode waiting for the glock, which is held by the first. The
    solution is to add a new non_block parameter to the gfs2_iget function
    that causes it to return -ENOENT if the inode is being freed.

    Signed-off-by: Bob Peterson <rpeterso@redhat.com>
    Signed-off-by: Steven Whitehouse <swhiteho@redhat.com>

| * | | | | GFS2: Don't try to deallocate unlinked inodes when mounted ro (Steven Whitehouse, 2011-04-18; 2 files, -2/+7)
    This adds a couple of missing tests to prevent read-only nodes from
    attempting to deallocate unlinked inodes.

    Signed-off-by: Steven Whitehouse <swhiteho@redhat.com>
    Reported-by: Michel Andre de la Porte <madelaporte@ubi.com>

| * | | | | GFS2: directly write blocks past i_size (Benjamin Marzinski, 2011-04-18; 1 file, -10/+48)
    GFS2 was relying on the writepage code to write out the zeroed data for
    fallocate. However, with FALLOC_FL_KEEP_SIZE set, this may be past i_size.
    If it is, it will be ignored. To work around this, gfs2 now calls
    write_dirty_buffer directly on the buffer_heads when FALLOC_FL_KEEP_SIZE
    is set, and it's writing past i_size.

    This version is just a cleanup of my last version.

    Signed-off-by: Benjamin Marzinski <bmarzins@redhat.com>
    Signed-off-by: Steven Whitehouse <swhiteho@redhat.com>
