path: root/doc/setup_howto
author    Christian Rößler  2013-11-22 21:57:00 +0100
committer Christian Rößler  2013-11-22 21:57:00 +0100
commit a054996247f12cdcb034988560acb28203105d06 (patch)
tree   77f0ccdb8c9994934081e2830d202761be09b7a3 /doc/setup_howto
parent Merge branch 'master' of dnbd3:openslx-ng/tm-scripts (diff)
[doc] setup_howto: Edited some more.
Diffstat (limited to 'doc/setup_howto')
-rw-r--r--  doc/setup_howto | 120
1 file changed, 68 insertions, 52 deletions
diff --git a/doc/setup_howto b/doc/setup_howto
index 69b85937..03f17942 100644
--- a/doc/setup_howto
+++ b/doc/setup_howto
@@ -24,7 +24,7 @@ printerGUI repository: git://git.openslx.org/openslx-ng/printergui.git
3. Server prerequisites
-Needed serices: dhcp, tftpd, httpd, NFS and/or dnbd3.
+Needed services: dhcp, tftpd, httpd, NFS and/or dnbd3.
4. Getting the source
@@ -75,22 +75,22 @@ Build addons (vmware etc.)
# ./mltk remote debug -c -b (as always: -d -> debug when appropriate)
-7. 'Packaging'
+7. Packaging
When using the parameter 'server', either an IP address or 'local' is expected.
If the building machine is also used to deliver the built boot images 'local'
should be used.
-If another machine is used to deliver the built images (by http etc.) the IP
-adress of the build machine shoud be used. In that case mltk needs to be
+If another machine is used to deliver the built images (by http etc.) the IP
+address of the build machine has to be used. In that case mltk needs to be
present on the server machine.
-Please take note that the 'remote' machine (the machine on which the build
-process runs) needs to export the build structure (option remote -n, see
+Please note that the remote machine (the machine on which the build process
+runs) needs to export the build structure (option remote -n, see
mltk --help). This option executes a bind mount of the local build directory
to a standardized place, /export/build, which can be accessed later from the
server machine via rsync. To facilitate this rsync-ing it may be wise to
-add the ssh key to authorized_keys on the build machine, as then no password
+add the ssh key to the build machine (authorized_keys), as then no password
has to be given when syncing from the server machine.
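Preparing the key-based login could, for example, look like this (a sketch
only; the host name 'build-machine' and the key path are placeholders for
your actual build machine):

```shell
# Sketch: prepare key-based ssh login from the server to the build machine,
# so rsync later needs no password. 'build-machine' is a placeholder.
KEYDIR="$(mktemp -d)"
ssh-keygen -q -t ed25519 -N '' -f "$KEYDIR/id_ed25519"
# Then append the public key to authorized_keys on the build machine, e.g.:
# ssh-copy-id -i "$KEYDIR/id_ed25519.pub" root@build-machine
```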
So, remember to execute
@@ -99,33 +99,38 @@ on the build machine, if the build is going to be synchronized to a dedicated
server machine.
-7.1 Packaging locally (build and server machine are the same)
+7.1 Packaging locally (build and server machine are the same machine)
-For 'packaging' the stages and an addon (vmware), presuming the same machine
+To package stages and addons (for example vmware), presuming the same machine
is used for building and serving:
# ./mltk server local stage31 -e stage31
# ./mltk server local stage32 -e stage32
(Use this call for openSuse:)
# ./mltk server local stage32-opensuse -e stage32 (for target opensuse)
-For packaging addons the parameter 'addons' should be used:
-# ./mltk server local vmware -e addons (... other addons likewise.)
+To package addons the parameter 'addons' has to be used:
+# ./mltk server local vmware -e addons
+... other addons likewise.
-7.2 Remote packaging (build and server machine not are the same)
+7.2 Remote packaging (needed if build and server machine are not identical)
First, do a
-# ./mltk server [IP of build machine] -s
+# ./mltk server [IP of build machine] -s
to synchronize all stage/addon builds in one pass. This synchronizes the
-complete build directories from the remote (build) machine to the server.
+complete build directories from the remote (build) machine to the server.
+It is possible to synchronize several build machines (thus different
+flavours) to one server. The IP addresses of build machines are used in the
+server directory structure to distinguish builds; therefore the option
+'local' should be used with care.
-Then you may package the stages and addons in analogue to tho local case
-mentioned above:
+The stages and addons may be packaged analogously to the local case
+mentioned above:
# ./mltk server [IP of build machine] stage31 -e stage31
# ./mltk server [IP of build machine] stage32 -e stage32
-(Use this call for openSuse instead:)
+Use this call for openSuse:
# ./mltk server [IP of build machine] stage32-opensuse -e stage32
For packaging addons the parameter 'addons' should be used:
@@ -142,19 +147,27 @@ needed:
8. Preparing for client boot
As example we suppose the packaged boot images are expected in
-[webroot]/boot/clients; of course the dhcp boot chain needs to be pointed
-to this directory also. The 'packaged' stages, addons and the kernel will
-be found on the server machine at ../tm-scripts/server/boot/[IP or local]/.
-It is recommended for convenience to link to these files, but they can also
-be copied to [webroot]/boot/clients, of course.
+[webroot]/boot/client. Of course the boot chain (or an (i)pxe-delivered boot
+menu) needs to be pointed to this directory as well. It is possible to use
+more than one directory when using a boot menu; different directories just
+need to be represented by separate entries in the boot menu.
+
+The packaged stages, addons and the kernel will be found on the server machine
+at .../tm-scripts/server/boot/[IP or local]/. For convenience it is recommended
+to link to these files, but they can also be copied to [webroot]/boot/client, of
+course.
So these links should be set:
+
initramfs-stage31
-> [path to tm-scripts]/server/boot/[IP or local]/initramfs-stage31
+
kernel
-> [path to tm-scripts]/server/boot/[IP or local]/kernel/kernel
+
stage32.sqfs
-> [path to tm-scripts]/server/boot/[IP or local]/stage32-opensuse.sqfs
+
vmware.sqfs
-> [path to tm-scripts]/server/boot/[IP or local]/vmware.sqfs
... other addons likewise.
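The links listed above can be set, for example, like this (a minimal sketch;
WEBROOT and BOOTSRC are placeholders for your webroot and your tm-scripts
checkout, and the defaults below only exist for illustration):

```shell
# Sketch: link the packaged boot files into the webroot's client directory.
# WEBROOT and BOOTSRC are placeholders; adjust them to your setup.
WEBROOT="${WEBROOT:-$(mktemp -d)}"
BOOTSRC="${BOOTSRC:-/root/tm-scripts/server/boot/local}"
mkdir -p "$WEBROOT/boot/client"
cd "$WEBROOT/boot/client"
ln -sf "$BOOTSRC/initramfs-stage31" initramfs-stage31
ln -sf "$BOOTSRC/kernel/kernel"     kernel
ln -sf "$BOOTSRC/stage32.sqfs"      stage32.sqfs
ln -sf "$BOOTSRC/vmware.sqfs"       vmware.sqfs
```

The links may dangle until the packaged files actually exist at BOOTSRC;
they resolve as soon as packaging has been done.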
@@ -162,61 +175,64 @@ vmware.sqfs
9. Client configurations
-Two configuration files will be needed in (following the example above)
-[webroot]/boot/clients: config and config.tgz.
+Two configuration files will be needed in the directory used in the example
+above, [webroot]/boot/client: config and config.tgz.
-The file config will be used for client boot parameters, eg. which NFS share
-will be used for storing VM images, proxy configurations, which addons are to
-be used and the like. Please take note that the client machine root password
-will be defined here. These parameters are documented in boot_config_vars.
+The config file will be used for client boot parameters, e.g. which NFS share
+will be used for storing VM images, proxy configurations, which addons are to
+be used and the like. Please take note that the client machine root password
+will be defined here. These parameters are documented in doc/boot_config_vars.
-The file config.tgz holds localization information for specific environments,
-e.g. university specific authetification, home directories, shares and the
-like. If there is no pre-formatted localization available it's perhaps a good
-idea to just touch config.tgz or pack an empty archive of that name. These
+The file config.tgz holds localization information for specific environments,
+e.g. specific local authentication, home directories, shares and the like.
+If there is no pre-formatted localization available it's perhaps a good
+idea to just touch config.tgz or pack an empty archive of that name. Example
localizations may be listed at [path to tm-scripts]/server/configs.
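The empty archive mentioned above can be created like this (run in the
client directory, e.g. [webroot]/boot/client):

```shell
# Sketch: create an empty config.tgz when no localization is available yet.
# -T /dev/null reads an empty file list, so the archive contains no entries.
tar -czf config.tgz -T /dev/null
```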
10. iPXE: TODO
-
11. Stage 4 (extract Linux desktop environment)
-The script clone_stage4 should not be used on the build computer if build and
-server machine are different. It has to be used on the computer which
-provides httpd and nfsd, or, in other words: On the computer which serves
-the boot files. That is the reason why clone_stage4 rejects 'local' as IP.
+The script clone_stage4 should not be used on a machine that fulfils only the
+building part; it has to be executed on a machine with server function (which
+may, of course, be the same machine used for building). To avoid confusion
+between the modes 'remote' and 'server', and possible malfunction, the script
+clone_stage4 rejects 'local' as IP parameter.
-To use stage4 a nfs export will be necessary, as the files of stage4 will
-be accessed by nfs client-side later on. Please keep in mind that
+To use Stage 4 an NFS export will be necessary, as later on the files of
+Stage 4 will be accessed client-side via NFS. Please keep in mind that
"./mltk remote -n" has to be executed on the build machine before cloning
-stage4.
+Stage 4.
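Such an export could be declared in /etc/exports roughly as follows (a
sketch; the subnet is a placeholder, and the export options are an assumption,
a read-only export usually being sufficient for Stage 4):

```
/path/to/your/nfs/share/stage4  192.168.0.0/24(ro,no_subtree_check)
```

Remember to run 'exportfs -ra' after editing /etc/exports.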
-Then make sure all builds are synced to the server machine, if that has not
+Then, make sure all builds are synced to the server machine, if that has not
happened before:
# ./mltk server [IP of build machine] -s
... or, if wanted, sync just some parts, if you know what you're doing.
-stage32 makes most sense so far, so to say. So, for example:
-# ./mltk server [IP] stage32 -e stage32 -s
+stage31 makes the most sense here. So, for example:
+# ./mltk server [IP] stage31 -e stage31 -s
etc.
Well, then do the cloning work:
# ./scripts/clone_stage4 [IP of build machine] stage32
/path/to/your/nfs/share/stage4 (this is one line!)
-To use stage4 the clients need to be given the nfs mount information. This is
-handled via a configuration variable (please consult doc/boot_config_vars
-for a full summary) called SLX_STAGE4_NFS.
+To use Stage 4 the clients need the nfs mount information. This is handled via
+a configuration variable (please consult doc/boot_config_vars for a full
+summary) called SLX_STAGE4_NFS.
+
+So now would be a good time to check (or re-check) that the base config file
+in the client directory you chose above (see 8. Preparing for client boot)
+contains a line
+SLX_STAGE4_NFS=[IP of service computer]:/path/to/your/nfs/share/stage4
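This check can be sketched in shell as follows (CONFIG, the IP and the export
path are placeholders; use your real [webroot]/boot/client/config and your
server's data):

```shell
# Sketch: make sure the client config carries the Stage 4 NFS variable.
# CONFIG, the IP and the path are placeholders for illustration only.
CONFIG="${CONFIG:-$(mktemp)}"
LINE='SLX_STAGE4_NFS=192.168.1.10:/path/to/your/nfs/share/stage4'
grep -q '^SLX_STAGE4_NFS=' "$CONFIG" || echo "$LINE" >> "$CONFIG"
grep '^SLX_STAGE4_NFS=' "$CONFIG"
```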
-So please check (or re-check) your base config file in the directory you
-chose above (see 8. Preparing for client boot) contains a line
-SLX_STAGE4_NFS=[IP of service computer]:/path/to/your/nfs/share/stage4.
+You should see Stage 4 working after rebooting the client. The Stage 4 entries
+should appear above the list of virtual machines.
-You should see stage4 working after rebooting the client if there are some
-entries besides Openbox above possibly a list of virtual machines. As a side
-note it should be possible to see the stage4 without complete reboot, if
+As a side note, Stage 4 should be usable without a complete reboot if a
+re-login is done and
# systemctl restart nfs-mount
is executed.