author    Simon Rettberg  2014-01-02 19:36:42 +0100
committer Simon Rettberg  2014-01-02 19:36:42 +0100
commit    61c9b1c97b1f5d07183987c2256637e523d1ff17 (patch)
tree      5c5fa89bb09873edceae121c2222948083636972 /doc
parent    <setup_target> Add check for changed .build/.conf file of modules, autoclean ... (diff)
!! Split up 'mltk' into 'mltk' and 'openslx' !!
'mltk remote' is now 'mltk'; 'mltk server' is now 'openslx'. Also changed the export type (-e) of stage31 to 'cpio' and of stage32 and addons to 'sqfs'. It should describe what the file is packed as, not what the meaning of the content is; you can already tell that from the file name.
Diffstat (limited to 'doc')
-rw-r--r--  doc/setup_howto | 137
1 file changed, 62 insertions(+), 75 deletions(-)
diff --git a/doc/setup_howto b/doc/setup_howto
index 9ed7f1d1..e520973e 100644
--- a/doc/setup_howto
+++ b/doc/setup_howto
@@ -2,11 +2,11 @@ This is a little howto to get people started on openSLX. Please expand!
-1. Prerequisites
+1. Client (template) prerequisites
Hard disk space: As the tm-scripts directory will expand considerably while
- building openSLX (to 5-6 GB), we recommend to allocate around 8-10 GB
- disk space. OpenSLX will install some packages into the base system
+ building mini-linux (to 5-6 GB), we recommend allocating around 8-10 GB of
+ disk space. mltk will install some packages into the base system
depending on chosen modules.
Currently supported distributions:
@@ -25,7 +25,7 @@ There are some other git repositories needed by the build process, but they
will be automatically checked out, e.g. busybox or printergui.
-3. Server prerequisites
+3. Deployment server prerequisites
Needed services: dhcp, tftpd, httpd, NFS and/or dnbd3.
@@ -35,12 +35,10 @@ Needed services: dhcp, tftpd, httpd, NFS and/or dnbd3.
checkout openSLX git repository:
# git clone git://git.openslx.org/openslx-ng/tm-scripts.git
-There are some other git repositories needed by the build process, but they
-are automatically checked out, e.g. busybox or printergui.
-
5. Getting started
+On your client machine that serves as the template for the final system:
Change into directory tm-scripts, and execute the mltk script ('mini linux
toolkit') without parameters (or use -h, --help) to see possible options
including some examples.
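
For example, assuming the checkout from step 4 above (the output depends on
your checkout):
# cd tm-scripts
# ./mltk --help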
@@ -57,112 +55,101 @@ stderr.log). Detailed information can also be obtained using the '-d'
kernel options arise, if no value was being given through 'make oldconfig',
as without '-d' the system will assume the default answer is correct.
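
For example, a verbose build of stage31 (flags as described in section 6
below) might look like this:
# ./mltk stage31 -c -b -d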
-Please take note that mltk functions are divided in two parts, somewhat
-misleadingly named 'remote' and 'server' (second parameter choice). As
-rule of thumb it may be said that 'remote' applies to building and 'server'
-applies to packaging the built system in appropriate ways (initramfs, sqfs)
-for delivery.
-
-
6. Building
Build Stage31:
-# ./mltk remote stage31 -c -b (-d -> debug when appropriate)
+# ./mltk stage31 -c -b (-d -> debug when appropriate)
-... this will take quite some time, mostly due to kernel compiling.
+... this will take quite some time, on the first run mostly due to kernel compiling.
Build Stage32:
-# ./mltk remote stage32 -c -b (-d )
+# ./mltk stage32 -c -b (-d )
-Build Stage32 for openSuse:
-# ./mltk remote stage32-opensuse -c -b
+Build Stage32 for openSuse: (not really needed, should be identical to stage32)
+# ./mltk stage32-opensuse -c -b
... this will take some time, mostly due to compiling a couple of packages.
-Building a single module:
-# ./mltk remote stage32 -c [module] -b [module] (-d)
+(Re)building a single module:
+# ./mltk stage32 -c [module] -b [module] (-d)
Building a single module for openSuse:
-./mltk remote stage32-opensuse -c [module] [module] -b (-d)
+# ./mltk stage32-opensuse -c [module] [module] -b (-d)
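
As a purely illustrative sketch (the module name 'mymodule' is hypothetical;
substitute a module that actually exists in your checkout):
# ./mltk stage32 -c mymodule -b mymodule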
Build addons (vmware etc.)
-# ./mltk remote vmware -c -b
-# ./mltk remote vbox -c -b
-# ./mltk remote debug -c -b (as always: -d -> debug when appropriate)
+# ./mltk vmware -c -b
+# ./mltk vbox -c -b
+# ./mltk debug -c -b (as always: -d -> debug when appropriate)
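
Putting it together, a complete first build on the template client could look
like this (addons only as needed):
# ./mltk stage31 -c -b
# ./mltk stage32 -c -b
# ./mltk vmware -c -b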
7. Packaging
-When using the parameter 'server' either an IP adress or 'local' is expected.
-If the building machine is also used to deliver the built boot images 'local'
-should be used.
-
-If another machine is used to deliver built images (by http etc.) the IP
-adress of the build machine has to be be used. In that case mltk needs to be
-present on the server machine.
-
-Please note that the remote machine (the machine on which the build process
-runs) needs to export the build structure (option remote -n, see
-mltk --help). This option executes a bind mount of the local build directory
-to a standardized place, /export/build, which can be accessed later from the
-server machine via rsync. To facilitate this rsync-ing it may be wise to
-add the ssh key to the build machine (authorized_keys), as then no password
+This should be done on the 'packaging server', which creates the files
+required for booting from the build on the remote template machine from above.
+You can do this on the same machine you built mini-linux on,
+but it might lead to problems when building stage 4 later.
+For this you need the openslx script from the tm-scripts repo, which
+also needs to be run as root (for proper rsync).
+
+Please note that the remote machine (the machine on which the build process
+ran) needs to export the build structure (option -n, see
+mltk --help). This option executes a bind mount of the local build directory
+to a standardized place, /export/build, which can be accessed later from the
+server machine via rsync. To facilitate this rsync-ing it may be wise to
+add the server's ssh key to the build machine (authorized_keys), as then no password
has to be given when syncing from the server machine.
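
For example, from the server machine (assuming root ssh access to the build
machine; the address is a placeholder):
# ssh-copy-id root@<IP of build machine>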
So, remember to execute
-# ./mltk remote -n
-on the build machine, as the build is usually going to by synchronized to a
-dedicated server machine.
+# ./mltk -n
+on the build machine once after bootup, as the build is usually going to
+be synchronized to a dedicated server machine for packaging.
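
To verify the export is in place you can, for example, check for the bind
mount described above:
# ./mltk -n
# mount | grep /export/build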
7.1 Packaging locally (build and server machine are the same machine)
-Even though the usual way to go is using dedicated machines to build and to
-serve it is possible to package locally, e.g. for testing purposes. So, to
-package stages and addons (for example vmware), presuming the same machine
+Even though the usual way to go is using dedicated machines to build and to
+serve, it is possible to package locally, e.g. for testing purposes. So, to
+package stages and addons (for example vmware), presuming the same machine
is used for building and serving:
-# ./mltk server local stage31 -e stage31
-# ./mltk server local stage32 -e stage32
+# ./openslx local stage31 -e cpio
+# ./openslx local stage32 -e sqfs
(Use this call for openSuse:)
-# ./mltk server local stage32-opensuse -e stage32 (for target opensuse)
+# ./openslx local stage32-opensuse -e sqfs (for target opensuse)
-To package addons the parameter 'addons' has to be used:
-# ./mltk server local vmware -e addons
-... other addons likewise.
+Addons can be packed the same way:
+# ./openslx local vmware -e sqfs
7.2 Remote packaging (needed if build and server machines are not identical)
First, do a
-# ./mltk server [IP a build machine] -s
+# ./openslx <IP of build machine> -s
-to synchronize all stage/addon builds in one pass. This synchronizes the
-complete build directories from the remote (build) machine to the server.
-It is possible to synchronize several build machines (thus different
-flavours) to one server. The IP adresses of build machines are used in
-server directory structure to distinguish builds; therefore the option
+to synchronize all stage/addon builds in one pass. This synchronizes the
+complete build directories from the remote (build) machine to the server.
+It is possible to synchronize several build machines (thus different
+flavours) to one server. The IP addresses of build machines are used in the
+server's directory structure to distinguish builds; therefore the option
'local' should be used with care.
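
For example, two different build machines could be synchronized to the same
server like this (the IP addresses are placeholders):
# ./openslx 192.168.1.10 -s
# ./openslx 192.168.1.20 -s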
The stages and addons may be packed analogously to the 'local' case
-mentioned above:
+mentioned above:
-# ./mltk server [IP of build machine] stage31 -e stage31
-# ./mltk server [IP of build machine] stage32 -e stage32
+# ./openslx <IP of build machine> stage31 -e cpio
+# ./openslx <IP of build machine> stage32 -e sqfs
Use this call for openSuse:
-# ./mltk server [IP of build machine] stage32-opensuse -e stage32
+# ./openslx <IP of build machine> stage32-opensuse -e sqfs
-For packaging addons the parameter 'addons' should be used:
-# ./mltk server [IP of build machine] vmware -e addons
-... other addons likewise.
+Addons:
+# ./openslx <IP of build machine> vmware -e sqfs
-Please note that stages/addons can be synchronized independently, if
-needed:
-# ./mltk server [IP of build machine] stage31 -e stage31 -s
+You can synchronize and pack at the same time:
+# ./openslx <IP of build machine> stage31 -e cpio -s
# [...]
-# ./mltk server [IP of build machine] vmware -e addons -s
+# ./openslx <IP of build machine> vmware -e sqfs -s
8. Preparing for client boot
@@ -227,30 +214,30 @@ the script clone_stage4 rejects 'local' as IP parameter.
To use Stage 4 an NFS export will be necessary, as later on the files of stage 4
will be accessed client-side via NFS. Please keep in mind that
-"./mltk remote -n" has to be executed on the build machine before cloning
+"./mltk -n" has to be executed on the build machine before cloning
Stage 4.
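
A minimal sketch of an /etc/exports entry for such a share (path and client
network are placeholders; adjust export options to your environment):
/path/to/your/nfs/share/stage4  192.168.0.0/24(ro,no_subtree_check)
# exportfs -ra   (reload the export table afterwards)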
Then, be sure all builds are synced to the server machine, if that has not
happened before:
-# ./mltk server [IP of build machine] -s
+# ./openslx <IP of build machine> -s
... or, if wanted, sync just some parts, if you know what you're doing.
Stage31 makes the most sense here, so to speak. So, for example:
-# ./mltk server [IP] stage31 -e stage31 -s
+# ./openslx <IP> stage31 -e cpio -s
etc.
Well, then do the cloning work:
-# ./scripts/clone_stage4 [IP of build machine] stage32
- /path/to/your/nfs/share/stage4 (this is one line!)
+# ./scripts/clone_stage4 [IP of build machine] stage32 \
+ /path/to/your/nfs/share/stage4 # (this is one line!)
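
Afterwards you can, for example, check that the stage 4 files actually arrived
in the share:
# ls /path/to/your/nfs/share/stage4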
To use Stage 4 the clients need the NFS mount information. This is handled via
a configuration variable (please consult doc/boot_config_vars for a full
-summary) called SLX_STAGE4_NFS.
+summary) called SLX_STAGE4.
So now would be a good time to check (or re-check) that your base config file
in the client directory you chose above (see 8. Preparing for client boot)
contains a line
-SLX_STAGE4_NFS=[IP of service computer]:/path/to/your/nfs/share/stage4
+SLX_STAGE4=[IP of service computer]:/path/to/your/nfs/share/stage4
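
For example, with purely hypothetical values:
SLX_STAGE4=192.168.1.5:/srv/nfs/openslx/stage4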
You should see Stage 4 working after rebooting the client. The Stage 4 entries
should be placed above the list of virtual machines.