path: root/include/asm-powerpc/page_64.h
* [POWERPC] Use 1TB segments (Paul Mackerras, 2007-10-12; 1 file changed, -1/+7)

This makes the kernel use 1TB segments for all kernel mappings and for user addresses of 1TB and above, on machines which support them (currently POWER5+, POWER6 and PA6T). We detect that the machine supports 1TB segments by looking at the ibm,processor-segment-sizes property in the device tree (a sketch of such a check follows this entry).

We don't currently use 1TB segments for user addresses below 1TB, since that would effectively prevent 32-bit processes from using huge pages unless we also had a way to revert to using 256MB segments. That would be possible but would involve extra complications (such as keeping track of which segment size was used when HPTEs were inserted) and is not addressed here.

Parts of this patch were originally written by Ben Herrenschmidt.

Signed-off-by: Paul Mackerras <paulus@samba.org>
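A minimal sketch, in C, of how such a device-tree check can look. The helper name scan_seg_sizes is hypothetical and the exact kernel flow may differ; of_get_flat_dt_prop() and of_scan_flat_dt() are the flattened-device-tree accessors of this era.

    #include <linux/init.h>
    #include <linux/types.h>
    #include <asm/prom.h>   /* of_get_flat_dt_prop(), of_scan_flat_dt() */

    /* Hypothetical scanner, run over device-tree nodes early in boot
     * via of_scan_flat_dt(); returns 1 once 1TB support is found. */
    static int __init scan_seg_sizes(unsigned long node, const char *uname,
                                     int depth, void *data)
    {
            unsigned long size = 0;
            /* The property is an array of u32 shift values: 28 for
             * 256MB segments, 40 for 1TB segments. */
            const u32 *prop = of_get_flat_dt_prop(node,
                            "ibm,processor-segment-sizes", &size);

            if (prop == NULL)
                    return 0;
            for (; size >= 4; size -= 4, ++prop)
                    if (*prop == 40)        /* 2^40 bytes == 1TB */
                            return 1;       /* 1TB segments supported */
            return 0;
    }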
* [POWERPC] iSeries: Clean up lparmap mess (Stephen Rothwell, 2007-08-22; 1 file changed, -1/+1)

We need to have xLparMap in head_64.S so that it is at a fixed address (because the linker will not resolve (address & 0xffffffff) for us). But the assembler miscalculates the KERNEL_VSID() expressions, so put the confusing expressions into asm-offsets.c.

Signed-off-by: Stephen Rothwell <sfr@canb.auug.org.au>
Signed-off-by: Paul Mackerras <paulus@samba.org>
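The asm-offsets.c trick mentioned here lets the C compiler evaluate the expression and emit the result as a named constant that head_64.S can reference. A hedged sketch of the pattern; the symbol name LPARMAP_VSID and the KERNEL_VSID() argument are illustrative, not the commit's actual choices.

    /* Each DEFINE() emits a "->SYM value" marker line into the
     * compiler's assembly output; kbuild post-processes these into
     * #define lines in asm-offsets.h, which .S files can #include. */
    #define DEFINE(sym, val) \
            asm volatile("\n->" #sym " %0 " #val : : "i" (val))

    int main(void)
    {
            DEFINE(LPARMAP_VSID, KERNEL_VSID(PAGE_OFFSET)); /* hypothetical */
            return 0;
    }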
* [POWERPC] Tidy up CONFIG_PPC_MM_SLICES code (Stephen Rothwell, 2007-08-17; 1 file changed, -0/+7)

This removes some of the #ifdefs from .c files.

Signed-off-by: Stephen Rothwell <sfr@canb.auug.org.au>
Signed-off-by: Paul Mackerras <paulus@samba.org>
* [POWERPC] Introduce address space "slices" (Benjamin Herrenschmidt, 2007-05-09; 1 file changed, -44/+42)

The basic issue is to be able to do what hugetlbfs does but with different page sizes for some other special filesystems; more specifically, my need is:

- Huge pages
- SPE local store mappings using 64K pages on a 4K base page size kernel on Cell
- Some special 4K segments in 64K-page kernels for mapping a dodgy type of powerpc-specific infiniband hardware that requires 4K MMU mappings for various reasons I won't explain here.

The main issues are:

- To maintain/keep track of the page size per "segment" (as we can only have one page size per segment on powerpc, which are 256MB divisions of the address space).
- To make sure special mappings stay within their allotted "segments" (including MAP_FIXED crap).
- To make sure everybody else doesn't mmap/brk/grow_stack into a "segment" that is used for a special mapping.

Some of the necessary mechanisms to handle that were present in the hugetlbfs code, but mostly in ways not suitable for anything else. The patch relies on some changes to the generic get_unmapped_area() that just got merged. It still hijacks hugetlb callbacks here or there, as the generic code hasn't been entirely cleaned up yet, but that shouldn't be a problem.

So what is a slice? I re-used the mechanism formerly used by our hugetlbfs implementation, which divides the address space into "meta-segments" that I called "slices". The division is done using 256MB slices below 4GB and 1TB slices above, so the address space is currently divided into 16 "low" slices and 16 "high" slices (special case: high slice 0 is the area between 4GB and 1TB). Doing so significantly simplifies the tracking of segments and avoids having to keep track of all the 256MB segments in the address space; a sketch of the index calculation follows this entry.

While I used the "concepts" of hugetlbfs, I mostly re-implemented everything in a more generic way and "ported" hugetlbfs to it. Slices can have an associated page size, which is encoded in the mmu context and used by the SLB miss handler to set the segment sizes. The hash code currently doesn't care; it has a specific check for hugepages, though I might add a mechanism to provide per-slice hash mapping functions in the future.

The slice code provides a pair of "generic" get_unmapped_area() functions (bottom-up and top-down) that should work with any slice size. There is some trickiness here, so I would appreciate people having a look at the implementation of these and letting me know if I got something wrong.

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
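A minimal sketch, in C, of the slice geometry just described; the constants and the helper name are illustrative rather than the kernel's exact code.

    /* 256MB slices below 4GB, 1TB slices above. High slice 0 covers
     * 4GB..1TB, since any address in that range shifted right by 40
     * gives 0. */
    #define SLICE_LOW_SHIFT    28   /* log2(256MB) */
    #define SLICE_HIGH_SHIFT   40   /* log2(1TB)   */

    static inline unsigned int addr_to_slice_index(unsigned long addr)
    {
            if (addr < (1UL << 32))                  /* below 4GB */
                    return addr >> SLICE_LOW_SHIFT;  /* low slice 0..15 */
            return addr >> SLICE_HIGH_SHIFT;         /* high slice 0..15 */
    }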
* [PATCH] powerpc: Fix pagetable bloat for hugepages (David Gibson, 2006-04-28; 1 file changed, -0/+1)

At present, ARCH=powerpc kernels can waste considerable space in pagetables when making large hugepage mappings. Hugepage PTEs go in PMD pages, but each PMD page maps 256M and so contains only 16 hugepage PTEs (128 bytes of data), but takes up a 1024 byte allocation. With CONFIG_PPC_64K_PAGES enabled (64k base page size), the situation is worse: now hugepage PTEs are at the PTE page level (also mapping 256M), so we store 16 hugepage PTEs in a 64k allocation.

The PowerPC MMU already means that any 256M region is either all hugepage, or all normal pages. Thus, with some care, we can use a different allocation for the hugepage PTE tables and only allocate the 128 bytes necessary.

Signed-off-by: Paul Mackerras <paulus@samba.org>
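The arithmetic behind those figures, as a sketch; the 16MB huge page size and 8-byte PTE size are assumptions consistent with ppc64 of this era, not values stated in the message.

    #define HPAGE_SHIFT_SKETCH   24  /* 16MB huge pages (assumption) */
    #define PMD_RANGE_SHIFT      28  /* each PMD page maps 256MB */

    /* 256MB / 16MB = 16 hugepage PTEs per 256MB region */
    #define HUGEPTES_PER_TABLE   (1UL << (PMD_RANGE_SHIFT - HPAGE_SHIFT_SKETCH))
    /* 16 PTEs * 8 bytes = 128 bytes actually needed, versus the
     * 1024-byte (4K base pages) or 64KB (64K base pages) allocation */
    #define HUGEPTE_TABLE_SIZE   (HUGEPTES_PER_TABLE * sizeof(unsigned long))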
* [PATCH] powerpc: sanitize header files for user space includes (Arnd Bergmann, 2006-01-09; 1 file changed, -0/+2)

include/asm-ppc/ had #ifdef __KERNEL__ in all header files that are not meant for use by user space; include/asm-powerpc does not have this yet. This patch gets us a lot closer to that goal. There are a few cases where I was not sure, so I left them out. I have verified that no CONFIG_* symbols are used outside of __KERNEL__ any more and that there are no obvious compile errors when including any of the headers in user space libraries.

Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Paul Mackerras <paulus@samba.org>
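The guard pattern being added, sketched for a hypothetical header (the header name and contents are illustrative):

    #ifndef _ASM_POWERPC_EXAMPLE_H
    #define _ASM_POWERPC_EXAMPLE_H

    /* definitions safe for user space go here */

    #ifdef __KERNEL__
    /* kernel-only definitions and CONFIG_* uses go here; user space
     * includes never see them */
    #endif /* __KERNEL__ */

    #endif /* _ASM_POWERPC_EXAMPLE_H */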
* [PATCH] powerpc: Replace VMALLOCBASE with VMALLOC_START (David Gibson, 2006-01-09; 1 file changed, -10/+0)

On ppc64, we independently define VMALLOCBASE and VMALLOC_START to be the same thing: the start of the vmalloc() area at 0xd000000000000000. VMALLOC_START is used much more widely, including in generic code, so this patch gets rid of the extraneous VMALLOCBASE.

This does require moving the definitions of region IDs from page_64.h to pgtable.h, but they don't clearly belong in the former rather than the latter, anyway. While we're moving them, clean up the definitions of the REGION_IDs:

- Abolish REGION_SIZE; it was only used once, to define REGION_MASK anyway.
- Define the specific region ids in terms of the REGION_ID() macro.
- Define KERNEL_REGION_ID in terms of PAGE_OFFSET rather than KERNELBASE. It amounts to the same thing, but conceptually this is about the region of the linear mapping (which starts at PAGE_OFFSET) rather than of the kernel text itself (which is at KERNELBASE).

Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Paul Mackerras <paulus@samba.org>
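A minimal sketch of the cleaned-up definitions along the lines the message describes; on ppc64 the top four address bits select the region, so REGION_ID() is a shift by 60. Treat this as illustrative rather than the commit's exact text.

    #define REGION_SHIFT        60UL
    #define REGION_ID(ea)       (((unsigned long)(ea)) >> REGION_SHIFT)

    /* VMALLOC_START is 0xd000000000000000, so this is region 0xd */
    #define VMALLOC_REGION_ID   (REGION_ID(VMALLOC_START))
    /* the linear mapping starts at PAGE_OFFSET (0xc...), region 0xc */
    #define KERNEL_REGION_ID    (REGION_ID(PAGE_OFFSET))
    #define USER_REGION_ID      (0UL)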
* [PATCH] powerpc: More hugepage boundary case fixes (David Gibson, 2005-11-25; 1 file changed, -6/+11)

Blah. The patch [0] I recently sent fixing errors with in_hugepage_area() and prepare_hugepage_range() for powerpc itself has an off-by-one bug. Furthermore, the related functions touches_hugepage_*_range() and within_hugepage_*_range() are also buggy.

Some of the bugs, like those addressed in [0], originated with commit 7d24f0b8a53261709938ffabe3e00f88f6498df9, where we tweaked the semantics of where hugepages are allowed. Other bugs have been there essentially forever, and are due to the undefined behaviour of '<<' with shift counts greater than the type width (LOW_ESID_MASK could return non-zero for high ranges with the right congruences). The good news is that I now have a testsuite which should pick up things like this if they creep in again.

[0] "powerpc-fix-for-hugepage-areas-straddling-4gb-boundary"

Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Paul Mackerras <paulus@samba.org>
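A self-contained illustration of the '<<' pitfall (this is not the kernel's LOW_ESID_MASK, just a demonstration of the undefined behaviour): shifting an unsigned int by 32 or more is undefined in C, so a mask that ought to be zero for high ranges can come out non-zero.

    #include <stdio.h>

    int main(void)
    {
            volatile unsigned int shift = 40;  /* a "high range" shift count */
            /* Undefined behaviour: shift count >= width of unsigned int.
             * x86 hardware masks the count mod 32, so this often computes
             * (1 << 8) - 1 = 0xff instead of the 0 one might expect. */
            unsigned int mask = (1U << shift) - 1;
            printf("mask = %#x\n", mask);
            return 0;
    }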
* [PATCH] powerpc: fix for hugepage areas straddling 4GB boundary (David Gibson, 2005-11-24; 1 file changed, -3/+3)

Commit 7d24f0b8a53261709938ffabe3e00f88f6498df9 fixed bugs in the ppc64 SLB miss handler with respect to hugepage handling, and in the process tweaked the semantics of the hugepage address masks in mm_context_t. Unfortunately, it left out a couple of necessary changes to go with that change. First, the in_hugepage_area() macro was not updated to match; second, prepare_hugepage_range() was not updated to correctly handle hugepage regions which straddled the 4GB point. The latter appears only to cause process hangs when attempting to map such a region, but the former can cause oopses if a get_user_pages() is triggered at the wrong point. This patch addresses both bugs.

Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
* [PATCH] ppc64 need HPAGE_SHIFT when huge pages disabled (Andy Whitcroft, 2005-11-18; 1 file changed, -0/+4)

With the new powerpc architecture we don't seem to be able to disable huge pages anymore:

    mm/built-in.o(.toc1+0xae0): undefined reference to `HPAGE_SHIFT'
    make: *** [.tmp_vmlinux1] Error 1

We seem to need to define HPAGE_SHIFT to something when HUGETLB_PAGE isn't defined. This patch defines it to PAGE_SHIFT when we have no support.

Signed-off-by: Andy Whitcroft <apw@shadowen.org>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
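The fix in outline, a sketch of the fallback the message describes; the value 24 in the enabled branch is an assumption matching ppc64's 16MB huge pages of this era.

    #ifdef CONFIG_HUGETLB_PAGE
    #define HPAGE_SHIFT     24              /* 16MB huge pages (assumption) */
    #else
    #define HPAGE_SHIFT     PAGE_SHIFT      /* no huge page support */
    #endif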
* [PATCH] powerpc: Merge page.h (Michael Ellerman, 2005-11-14; 1 file changed, -0/+174)

Merge asm-ppc/page.h and asm-ppc64/page.h into asm-powerpc/page.h, asm-powerpc/page_32.h and asm-powerpc/page_64.h.

Built for PPC (common_defconfig) with ARCH=powerpc; mostly built with ARCH=ppc (other things break the build). Built and booted on P5 LPAR for PPC64 with ARCH=ppc/powerpc (pseries_defconfig). Mostly built for iSeries powerpc.

Signed-off-by: Michael Ellerman <michael@ellerman.id.au>
Signed-off-by: Paul Mackerras <paulus@samba.org>
Merge asm-ppc/page.h and asm-ppc64/page.h into asm-powerpc/page.h, asm-powerpc/page_32.h and asm-powerpc/page_64.h Built for PPC (common_defconfig), with ARCH=powerpc, mostly built with ARCH=ppc (other things break the build). Built and booted on P5 LPAR for PPC64 with ARCH=ppc/powerpc (pseries_defconfig). Mostly built for iSeries powerpc. Signed-off-by: Michael Ellerman <michael@ellerman.id.au> Signed-off-by: Paul Mackerras <paulus@samba.org>