path: root/fs/ceph/caps.c
...
* include cleanup: Update gfp.h and slab.h includes to prepare for breaking implicit slab.h inclusion from percpu.h
  Tejun Heo, 2010-03-30 (1 file, -0/+1)

  percpu.h is included by sched.h and module.h and thus ends up being included when building most .c files. percpu.h includes slab.h which in turn includes gfp.h, making everything defined by the two files universally available and complicating inclusion dependencies.

  The percpu.h -> slab.h dependency is about to be removed. Prepare for this change by updating users of gfp and slab facilities to include those headers directly instead of assuming availability. As this conversion needs to touch a large number of source files, the following script was used as the basis of conversion.

    http://userweb.kernel.org/~tj/misc/slabh-sweep.py

  The script does the following.

  * Scan files for gfp and slab usages and update includes such that only the necessary includes are there, i.e. if only gfp is used, gfp.h; if slab is used, slab.h.

  * When the script inserts a new include, it looks at the include blocks and tries to put the new include such that its order conforms to its surroundings. It is put in the include block which contains core kernel includes, in the same order that the rest are ordered - alphabetical, Christmas tree, rev-Xmas-tree, or at the end if there doesn't seem to be any matching order.

  * If the script can't find a place to put a new include (mostly because the file doesn't have a fitting include block), it prints out an error message indicating which .h file needs to be added to the file.

  The conversion was done in the following steps.

  1. The initial automatic conversion of all .c files updated slightly over 4000 files, deleting around 700 includes and adding ~480 gfp.h and ~3000 slab.h inclusions. The script emitted errors for ~400 files.

  2. Each error was manually checked. Some didn't need the inclusion, some needed manual addition, while adding it to the implementation .h or embedding .c file was more appropriate for others. This step added inclusions to around 150 files.

  3. The script was run again and the output was compared to the edits from #2 to make sure no file was left behind.

  4. Several build tests were done and a couple of problems were fixed, e.g. lib/decompress_*.c used malloc/free() wrappers around slab APIs, requiring slab.h to be added manually.

  5. The script was run on all .h files, but without automatically editing them, as sprinkling gfp.h and slab.h inclusions around .h files could easily lead to inclusion dependency hell. Most gfp.h inclusion directives were ignored, as stuff from gfp.h was usually widely available and often used in preprocessor macros. Each slab.h inclusion directive was examined and added manually as necessary.

  6. percpu.h was updated not to include slab.h.

  7. Build tests were done on the following configurations and failures were fixed. CONFIG_GCOV_KERNEL was turned off for all tests (as my distributed build env didn't work with gcov compiles) and a few more options had to be turned off depending on archs to make things build (like ipr on powerpc/64, which failed due to missing writeq).

     * x86 and x86_64 UP and SMP allmodconfig and a custom test config
     * powerpc and powerpc64 SMP allmodconfig
     * sparc and sparc64 SMP allmodconfig
     * ia64 SMP allmodconfig
     * s390 SMP allmodconfig
     * alpha SMP allmodconfig
     * um on x86_64 SMP allmodconfig

  8. percpu.h modifications were reverted so that it could be applied as a separate patch and serve as a bisection point.

  Given the fact that I had only a couple of failures from tests on step 6, I'm fairly confident about the coverage of this conversion patch. If there is a breakage, it's likely to be something in one of the arch headers which should be easily discoverable on most builds of the specific arch.

  Signed-off-by: Tejun Heo <tj@kernel.org>
  Guess-its-ok-by: Christoph Lameter <cl@linux-foundation.org>
  Cc: Ingo Molnar <mingo@redhat.com>
  Cc: Lee Schermerhorn <Lee.Schermerhorn@hp.com>
* ceph: only release unused caps with mds requests
  Sage Weil, 2010-03-23 (1 file, -3/+10)

  We were releasing used caps (e.g. FILE_CACHE) from encode_inode_release with MDS requests (e.g. setattr). We don't carry refs on most caps, so this code worked most of the time, but for setattr (utimes) we try to drop Fscr. This causes cap state to get slightly out of sync with reality, and may result in subsequent mds revoke messages getting ignored. Fix by only releasing unused caps.

  Signed-off-by: Sage Weil <sage@newdream.net>
* ceph: clean up handle_cap_grant, handle_caps wrt session mutex
  Sage Weil, 2010-03-23 (1 file, -32/+25)

  Drop session mutex unconditionally in handle_cap_grant, and do the check_caps from the handle_cap_grant helper. This avoids using a magic return value. Also avoid using a flag variable in the IMPORT case and call check_caps at the appropriate point.

  Signed-off-by: Sage Weil <sage@newdream.net>
* ceph: fix session locking in handle_caps, ceph_check_caps
  Sage Weil, 2010-03-23 (1 file, -5/+9)

  Passing a session pointer to ceph_check_caps() used to mean it would leave the session mutex locked. That wasn't always possible if it wasn't passed CHECK_CAPS_AUTHONLY. It could unlock the passed session and lock a different session's mutex, which was clearly wrong, and also emitted a warning when a racing CPU retook it and we did an unlock from the wrong context. This was only a problem when there was more than one MDS.

  First, make ceph_check_caps unconditionally drop the session mutex, so that it is free to lock other sessions as needed. Then adjust the one caller that passes in a session (handle_cap_grant) accordingly.

  Signed-off-by: Sage Weil <sage@newdream.net>
* ceph: drop unnecessary WARN_ON in caps migration
  Sage Weil, 2010-03-23 (1 file, -2/+1)

  If we don't have the exported cap it's because we already released it. No need to WARN.

  Signed-off-by: Sage Weil <sage@newdream.net>
* ceph: implemented caps should always be superset of issued caps
  Sage Weil, 2010-03-21 (1 file, -0/+2)

  Added an assertion, and cleaned up one case where the implemented caps were not following the issued caps.

  Signed-off-by: Yehuda Sadeh <yehuda@hq.newdream.net>
  Signed-off-by: Sage Weil <sage@newdream.net>
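  As a minimal illustration of the invariant (not necessarily the literal assertion added by this patch), the superset relation between the two cap bitmasks in struct ceph_cap can be checked like this:

      /* every issued bit must also be present in implemented */
      WARN_ON((cap->issued & ~cap->implemented) != 0);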
* ceph: update for write_inode API change
  Stephen Rothwell, 2010-03-05 (1 file, -1/+3)

  Signed-off-by: Stephen Rothwell <sfr@canb.auug.org.au>
  Signed-off-by: Sage Weil <sage@newdream.net>
* ceph: fix flush_dirty_caps race with caps migration
  Sage Weil, 2010-03-02 (1 file, -7/+38)

  flush_dirty_caps() used to loop over the first entry of the cap_dirty list on the assumption that after calling ceph_check_caps() it would be removed from the list. This isn't true for caps that are being migrated between MDSs, where we've received the EXPORT but not the IMPORT. Instead, do a safe list iteration, and pin the next inode on the list via the CEPH_I_NOFLUSH flag.

  Signed-off-by: Sage Weil <sage@newdream.net>
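  A condensed sketch of that iteration pattern (illustrative only, not the actual patch; the field and flag names follow the description above, and locking is reduced to the bare minimum):

      static void flush_dirty_caps_sketch(struct ceph_mds_client *mdsc)
      {
              struct ceph_inode_info *ci, *next;

              spin_lock(&mdsc->cap_dirty_lock);
              list_for_each_entry_safe(ci, next, &mdsc->cap_dirty, i_dirty_item) {
                      if (&next->i_dirty_item != &mdsc->cap_dirty)
                              next->i_ceph_flags |= CEPH_I_NOFLUSH;   /* pin next */
                      spin_unlock(&mdsc->cap_dirty_lock);

                      /* may sleep and may unlink ci from cap_dirty; the pinned
                       * 'next' stays on the list because flushers skip inodes
                       * flagged CEPH_I_NOFLUSH */
                      ceph_check_caps(ci, CHECK_CAPS_FLUSH | CHECK_CAPS_NODELAY, NULL);

                      spin_lock(&mdsc->cap_dirty_lock);
                      if (&next->i_dirty_item != &mdsc->cap_dirty)
                              next->i_ceph_flags &= ~CEPH_I_NOFLUSH;  /* unpin */
              }
              spin_unlock(&mdsc->cap_dirty_lock);
      }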
* ceph: include migrating caps in issued set
  Sage Weil, 2010-03-02 (1 file, -1/+1)

  We should include caps that are mid-migration (we've received the EXPORT, but not the IMPORT) in the issued caps set.

  Signed-off-by: Sage Weil <sage@newdream.net>
* ceph: return EBADF if waiting for caps on closed file
  Sage Weil, 2010-03-02 (1 file, -3/+6)

  Verify the file is actually open for the given caps when we are waiting for caps. This ensures we will wake up and return EBADF if another thread closes the file out from under us. Note that EBADF is also the correct return code from write(2) when called on a file handle opened for reading (although the vfs should catch that).

  Signed-off-by: Sage Weil <sage@newdream.net>
* ceph: fix snaptrace decoding on cap migration between mds
  Sage Weil, 2010-03-02 (1 file, -2/+3)

  This was simply broken. Apparently at some point we thought about putting the snaptrace in the middle section, but didn't.

  Signed-off-by: Sage Weil <sage@newdream.net>
* ceph: drop messages on unregistered mds sessions; cleanup
  Sage Weil, 2010-02-23 (1 file, -1/+1)

  Verify the mds session is currently registered before handling incoming messages. Clean up message handlers to pull mds out of session->s_mds instead of the less trustworthy src field. Clean up con_{get,put} debug output.

  Signed-off-by: Sage Weil <sage@newdream.net>
* ceph: fix comments, locking in destroy_inode
  Sage Weil, 2010-02-23 (1 file, -7/+4)

  The destroy_inode path needs no inode locks since there are no inode references. Update the __ceph_remove_cap comment to reflect that it is called without cap->session->s_mutex in this case.

  Signed-off-by: Sage Weil <sage@newdream.net>
* ceph: cleanup redundant code in handle_cap_grant
  Yehuda Sadeh, 2010-02-19 (1 file, -5/+1)

  There is no state in local vars that requires us to loop after temporarily dropping i_lock.

  Signed-off-by: Yehuda Sadeh <yehuda@hq.newdream.net>
  Signed-off-by: Sage Weil <sage@newdream.net>
* ceph: fix check for invalidate_mapping_pages success
  Sage Weil, 2010-02-19 (1 file, -32/+50)

  We need to know whether any page was left behind, not the return value (the total number of pages invalidated). Look at the mapping to see if we were successful or not. Move it all into a helper to simplify the two callers.

  Signed-off-by: Yehuda Sadeh <yehuda@hq.newdream.net>
  Signed-off-by: Sage Weil <sage@newdream.net>
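  A sketch of what such a helper can look like (a simplified illustration of the description above, not the actual code; the helper name is assumed):

      /*
       * Best-effort, non-blocking invalidate: drop whatever unlocked, clean
       * pages we can, then judge success by whether the mapping is now empty
       * rather than by the page count returned by invalidate_mapping_pages().
       */
      static int try_nonblocking_invalidate(struct inode *inode)
      {
              struct address_space *mapping = inode->i_mapping;

              invalidate_mapping_pages(mapping, 0, -1);
              return mapping->nrpages == 0 ? 0 : -1;
      }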
* ceph: fix iterate_caps removal race
  Sage Weil, 2010-02-17 (1 file, -23/+24)

  We need to be able to iterate over all caps on a session with a possibly slow callback on each cap. To allow this, we used to prevent cap reordering while we were iterating. However, we were not safe from races with removal: removing the 'next' cap would make the next pointer from list_for_each_entry_safe be invalid, and cause a lock up or similar badness.

  Instead, we keep an iterator pointer in the session pointing to the current cap. As before, we avoid reordering. For removal, if the cap isn't the current cap we are iterating over, we are fine. If it is, we clear cap->ci (to mark the cap as pending removal) but leave it in the session list. In iterate_caps, we can safely finish removal and get the next cap pointer.

  While we're at it, clean up put_cap to not take a cap reservation context, as it was never used.

  Signed-off-by: Sage Weil <sage@newdream.net>
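  The removal side of that scheme, as a condensed sketch (simplified and hypothetical; the field names follow the description above rather than the literal patch):

      static void remove_cap_sketch(struct ceph_mds_session *session,
                                    struct ceph_cap *cap)
      {
              spin_lock(&session->s_cap_lock);
              if (cap == session->s_cap_iterator) {
                      /* the iterator is sitting on this cap: only detach it
                       * from its inode and let iterate_caps finish the unlink
                       * when it advances past it */
                      cap->ci = NULL;
              } else {
                      list_del_init(&cap->session_caps);
                      session->s_nr_caps--;
                      cap->session = NULL;
              }
              spin_unlock(&session->s_cap_lock);
      }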
* ceph: clean up readdir caps reservation
  Sage Weil, 2010-02-17 (1 file, -6/+17)

  Use a global counter for the minimum number of allocated caps instead of hard-coding a check against readdir_max. This takes into account multiple client instances, and avoids examining the superblock mount options when a cap is dropped.

  Signed-off-by: Sage Weil <sage@newdream.net>
* ceph: remove bogus invalidate_mapping_pages
  Sage Weil, 2010-02-11 (1 file, -6/+0)

  We were invalidating mapping pages when dropping FILE_CACHE in __send_cap(). But ceph_check_caps attempts to invalidate already, and also checks for success, so we should never get to this point.

  Signed-off-by: Sage Weil <sage@newdream.net>
* ceph: invalidate pages even if truncate is pending
  Sage Weil, 2010-02-11 (1 file, -1/+0)

  There is no reason not to invalidate pages when a truncate is pending. Both throw out page cache pages.

  Signed-off-by: Sage Weil <sage@newdream.net>
* ceph: cleanup async writeback, truncation, invalidate helpers
  Sage Weil, 2010-02-11 (1 file, -17/+8)

  Grab inode ref in helper. Make work functions static, with consistent naming.

  Signed-off-by: Sage Weil <sage@newdream.net>
* ceph: do not retain caps that are being revoked
  Sage Weil, 2010-02-11 (1 file, -4/+6)

  Never retain caps in __send_cap() that are being revoked.

  Signed-off-by: Sage Weil <sage@newdream.net>
* ceph: cap revocation fixes
  Sage Weil, 2010-02-11 (1 file, -5/+17)

  Try to invalidate pages in ceph_check_caps() when FILE_CACHE is being revoked. If that fails, queue an immediate async invalidate. (If FILE_CACHE is not being revoked, we just queue the caps for later evaluation, as per the old behavior.)

  Signed-off-by: Sage Weil <sage@newdream.net>
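  As a fragment, the fallback described above might look roughly like this (hypothetical sketch reusing the non-blocking invalidate helper sketched under the invalidate_mapping_pages entry above; the async queue helper name is assumed):

      /* FILE_CACHE is being revoked: reclaim the page cache now if we can
       * do so without blocking, otherwise punt to the async worker. */
      if ((revoking & CEPH_CAP_FILE_CACHE) &&
          try_nonblocking_invalidate(inode) < 0)
              ceph_queue_invalidate(inode);   /* async fallback */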
* ceph: include transaction id in ceph_msg_header (protocol change)
  Sage Weil, 2009-12-23 (1 file, -8/+8)

  Many (most?) message types include a transaction id. By including it in the fixed-size header, we always have it available even when we are unable to allocate memory for the (larger, variable-sized) message body. This will allow us to error out the appropriate request instead of (silently) dropping the reply.

  Signed-off-by: Sage Weil <sage@newdream.net>
* ceph: do not touch_caps while iterating over caps list
  Sage Weil, 2009-12-23 (1 file, -3/+8)

  To avoid confusing iterate_session_caps(), flag the session while we are iterating so that __touch_cap does not rearrange items on the list. All other modifiers of session->s_caps do so under the protection of s_mutex.

  Signed-off-by: Sage Weil <sage@newdream.net>
* ceph: hex dump corrupt server data to KERN_DEBUG
  Sage Weil, 2009-12-22 (1 file, -0/+1)

  Also, print fsid using standard format, NOT hex dump.

  Signed-off-by: Sage Weil <sage@newdream.net>
* ceph: whitespace cleanup
  Sage Weil, 2009-12-03 (1 file, -1/+1)

  Signed-off-by: Sage Weil <sage@newdream.net>
* ceph: fix page invalidation deadlock
  Sage Weil, 2009-11-13 (1 file, -2/+2)

  We occasionally want to make a best-effort attempt to invalidate cache pages without fear of blocking. If this fails, we fall back to an async invalidate in another thread. Use invalidate_mapping_pages instead of invalidate_inode_pages2, as the former will skip locked pages and not deadlock.

  Signed-off-by: Sage Weil <sage@newdream.net>
* ceph: remove recon_gen logic
  Sage Weil, 2009-11-11 (1 file, -11/+1)

  We don't get an explicit affirmative confirmation that our caps reconnect, nor do we necessarily want to pay that cost. So, take all this code out for now.

  Signed-off-by: Sage Weil <sage@newdream.net>
* ceph: do not confuse stale and dead (unreconnected) caps
  Sage Weil, 2009-11-09 (1 file, -5/+15)

  We were using the cap_gen to track both stale caps (caps that timed out due to temporarily losing touch with the mds) and dead caps that did not reconnect after an MDS failure. Introduce a recon_gen counter to track reconnections to restarted MDSs and kill dead caps based on that instead. Rename gen to cap_gen while we're at it to make it more clear which is which.

  Signed-off-by: Sage Weil <sage@newdream.net>
* ceph: allocate and parse mount args before client instance
  Sage Weil, 2009-10-27 (1 file, -2/+2)

  This simplifies much of the error handling during mount. It also means that we have the mount args before client creation, and we can initialize based on those options.

  Signed-off-by: Sage Weil <sage@newdream.net>
* ceph: move dirty caps code around
  Sage Weil, 2009-10-16 (1 file, -33/+37)

  Cleanup only.

  Signed-off-by: Sage Weil <sage@newdream.net>
* ceph: flush dirty caps via the cap_dirty list
  Sage Weil, 2009-10-16 (1 file, -23/+53)

  Previously we were flushing dirty caps by passing an extra flag when traversing the delayed caps list. Besides being a bit ugly, that can also miss caps that are dirty but didn't result in a cap requeue: notably, mark_caps_dirty(). Move the flushing into a separate helper, and traverse the cap_dirty list.

  This also brings i_dirty_item in line with i_dirty_caps: we are on the list IFF dirty_caps != 0. We carry an inode ref IFF dirty_caps|flushing_caps != 0.

  Lose the unused return value from __ceph_mark_caps_dirty().

  Signed-off-by: Sage Weil <sage@newdream.net>
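  A sketch of the marking side under those two invariants (simplified and hypothetical; locking is omitted and the helper name is made up):

      static void mark_caps_dirty_sketch(struct ceph_mds_client *mdsc,
                                         struct ceph_inode_info *ci, int mask)
      {
              int was_clean = (ci->i_dirty_caps | ci->i_flushing_caps) == 0;

              ci->i_dirty_caps |= mask;
              if (list_empty(&ci->i_dirty_item))      /* on cap_dirty iff dirty */
                      list_add(&ci->i_dirty_item, &mdsc->cap_dirty);
              if (was_clean)                          /* ref held iff dirty|flushing */
                      igrab(&ci->vfs_inode);
      }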
* ceph: move generic flushing code into helper
  Sage Weil, 2009-10-14 (1 file, -22/+21)

  Both callers of __mark_caps_flushing() do the same work; move it into the helper.

  Signed-off-by: Sage Weil <sage@newdream.net>
* ceph: capability management
  Sage Weil, 2009-10-06 (1 file, -0/+2830)

  The Ceph metadata servers control client access to inode metadata and file data by issuing capabilities, granting clients permission to read and/or write both inode fields and file data to OSDs (storage nodes). Each capability consists of a set of bits indicating which operations are allowed.

  If the client holds a *_SHARED cap, the client has a coherent value that can be safely read from the cached inode. In the case of a *_EXCL (exclusive) or FILE_WR capability, the client is allowed to change inode attributes (e.g., file size, mtime), note its dirty state in the ceph_cap, and asynchronously flush that metadata change to the MDS. In the event of a conflicting operation (perhaps by another client), the MDS will revoke the conflicting client capabilities.

  In order for a client to cache an inode, it must hold a capability with at least one MDS server. When inodes are released, release notifications are batched and periodically sent en masse to the MDS cluster to release server state.

  Signed-off-by: Sage Weil <sage@newdream.net>
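  To make the model concrete, here is a simplified illustration of how capability bits can be represented and tested. The names and values are made up for the example; the real layout in ceph_fs.h is more elaborate:

      /* made-up encoding: a capability is a set of bits, and "may the
       * client do X?" is a mask test against what the MDS granted */
      #define EX_CAP_AUTH_SHARED  0x01   /* coherent cached inode attributes  */
      #define EX_CAP_FILE_SHARED  0x02   /* coherent cached file metadata     */
      #define EX_CAP_FILE_EXCL    0x04   /* may change + buffer file metadata */
      #define EX_CAP_FILE_CACHE   0x08   /* may cache file data locally       */
      #define EX_CAP_FILE_WR      0x10   /* may write file data to the OSDs   */

      static inline int cap_grants(unsigned int held, unsigned int want)
      {
              return (held & want) == want;   /* all wanted bits must be held */
      }

  For example, buffering a size or mtime change is only legal while cap_grants(held, EX_CAP_FILE_EXCL) is true, and the MDS revokes that bit when another client's operation conflicts.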