This is a little howto to get people started on openSLX. Please expand!


1. Prerequisites

Hard disk space: As the tm-scripts directory will expand considerably while
	building openSLX (to 5-6 GB), we recommend allocating around 8-10 GB of
	disk space. openSLX will also install some packages into the base
	system, depending on the chosen modules.

Currently supported distributions:
	Ubuntu 12.04 LTS, Ubuntu 13.04, openSUSE 12.3.

Network access is vital, as packages and sources will be downloaded from the
internet while building. Please note that root privileges are needed to run
the scripts, since e.g. required packages will be installed automatically.


2. Source repositories

openSLX main repository: git://git.openslx.org/openslx-ng/tm-scripts.git

There are some other git repositories needed by the build process, but they
will be automatically checked out, e.g. busybox or printergui.


3. Server prerequisites

Needed services: dhcp, tftpd, httpd, NFS and/or dnbd3.
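
As a minimal sketch, dnsmasq could provide both DHCP and TFTP; the following
/etc/dnsmasq.conf fragment is purely illustrative (addresses and paths are
hypothetical):

dhcp-range=192.168.1.100,192.168.1.200,12h
dhcp-boot=pxelinux.0
enable-tftp
tftp-root=/srv/tftp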


4. Getting the source

Check out the openSLX git repository:
# git clone git://git.openslx.org/openslx-ng/tm-scripts.git


5. Getting started

Change into the tm-scripts directory and execute the mltk script ('mini linux
toolkit') without parameters (or use -h, --help) to see the available options,
including some examples.
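
For example, from the checkout created above:
# cd tm-scripts
# ./mltk --help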

mltk will check for some essential basic development tools like gcc, make
etc. If development tools are missing, mltk will display a message like
| [info]     You appear to be missing following development tools.
along with installation hints about the needed package(s), and will suggest
the package manager invocations needed to remedy the situation.

mltk will write detailed log files to the directory 'logs' (stdout.log,
stderr.log). More detailed information can also be obtained using the '-d'
(debug) switch. If a kernel is being compiled, '-d' will ask whenever a
kernel option has no value set (as in 'make oldconfig'); without '-d' the
system will assume the default answer is correct.
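
For example, to follow the build output from another terminal:
# tail -f logs/stdout.log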

Please take note that mltk's functions are divided into two parts, somewhat
misleadingly named 'remote' and 'server' (the second parameter). As a rule
of thumb, 'remote' applies to building and 'server' applies to packaging the
built system in appropriate ways (initramfs, sqfs) for delivery.


6. Building

Build Stage31:
# ./mltk remote stage31 -c -b (-d -> debug when appropriate)

... this will take quite some time, mostly due to kernel compiling.

Build Stage32:
# ./mltk remote stage32 -c -b (-d)

Build Stage32 for openSUSE:
# ./mltk remote stage32-opensuse -c -b

... this will take some time, mostly due to compiling a couple of packages.


Building a single module:
# ./mltk remote stage32 -c [module] -b [module] (-d)

Building a single module for openSUSE:
# ./mltk remote stage32-opensuse -c [module] -b [module] (-d)


Build addons (vmware etc.):
# ./mltk remote vmware -c -b
# ./mltk remote vbox -c -b
# ./mltk remote debug -c -b (as always: -d -> debug when appropriate)


7. Packaging

When using the parameter 'server', either an IP address or 'local' is
expected. If the building machine is also used to deliver the built boot
images, 'local' should be used.

If another machine is used to deliver the built images (via http etc.), the
IP address of the build machine has to be used. In that case mltk needs to
be present on the server machine as well.

Please note that the remote machine (the machine on which the build process
runs) needs to export the build structure (option remote -n, see
mltk --help). This option bind-mounts the local build directory to a
standardized place, /export/build, which can later be accessed from the
server machine via rsync. To facilitate this rsyncing it may be wise to add
the server's ssh key to the build machine (authorized_keys), as then no
password has to be given when syncing from the server machine.
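
For example (a sketch; it assumes root is the user doing the syncing and
that an rsa key is acceptable, both of which are assumptions):
# ssh-keygen -t rsa		(on the server machine, if no key exists yet)
# ssh-copy-id root@[IP of build machine]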

So, remember to execute
# ./mltk remote -n
on the build machine, as the build is usually going to be synchronized to a
dedicated server machine.


7.1 Packaging locally (build and server machine are the same machine)

Even though the usual way is to use dedicated machines for building and
serving, it is possible to package locally, e.g. for testing purposes. So,
to package stages and addons (for example vmware), presuming the same
machine is used for building and serving:
# ./mltk server local stage31 -e stage31
# ./mltk server local stage32 -e stage32
Use this call for openSUSE:
# ./mltk server local stage32-opensuse -e stage32

To package addons the parameter 'addons' has to be used:
# ./mltk server local vmware -e addons
... other addons likewise.


7.2 Remote packaging (needed if build and server machines are not identical)

First, do a
# ./mltk server [IP of build machine] -s

to synchronize all stage/addon builds in one pass. This synchronizes the
complete build directories from the remote (build) machine to the server.
It is possible to synchronize several build machines (and thus different
flavours) to one server. The IP addresses of the build machines are used in
the server directory structure to distinguish builds; therefore the option
'local' should be used with care.

The stages and addons may be packaged analogously to the 'local' case
mentioned above:

# ./mltk server [IP of build machine] stage31 -e stage31
# ./mltk server [IP of build machine] stage32 -e stage32

Use this call for openSUSE:
# ./mltk server [IP of build machine] stage32-opensuse -e stage32 

For packaging addons the parameter 'addons' should be used:
# ./mltk server [IP of build machine] vmware -e addons
... other addons likewise.

Please note that stages/addons can be synchronized independently, if 
needed:
# ./mltk server [IP of build machine] stage31 -e stage31 -s
# [...]
# ./mltk server [IP of build machine] vmware -e addons -s


8. Preparing for client boot

As an example, we suppose the packaged boot images are expected in
[webroot]/boot/client. Of course, the boot chain (or an (i)PXE-delivered boot
menu) needs to point to this directory as well. By the way, it is possible
to use more than one directory when using a boot menu; different directories
just need to be represented by separate entries in the boot menu.
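
A minimal sketch of a matching Syslinux/PXELINUX menu entry (the label is
hypothetical, and further kernel command line parameters depend on your
setup):

LABEL openslx
	KERNEL boot/client/kernel
	APPEND initrd=boot/client/initramfs-stage31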

The packaged stages, addons and the kernel can be found on the server machine
at .../tm-scripts/server/boot/[IP or local]/. For convenience it is
recommended to symlink these files, but they can of course also be copied to
[webroot]/boot/client.

So these links should be set:

initramfs-stage31
-> [path to tm-scripts]/server/boot/[IP or local]/initramfs-stage31

kernel
-> [path to tm-scripts]/server/boot/[IP or local]/kernel/kernel

stage32.sqfs
-> [path to tm-scripts]/server/boot/[IP or local]/stage32.sqfs
   (for openSUSE builds, point this link at stage32-opensuse.sqfs instead)

vmware.sqfs
-> [path to tm-scripts]/server/boot/[IP or local]/vmware.sqfs
... other addons likewise.
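
Setting these links could, for example, look like this (a sketch, assuming a
local build and [webroot] = /srv/www, both of which are hypothetical):

# cd /srv/www/boot/client
# ln -s [path to tm-scripts]/server/boot/local/initramfs-stage31 initramfs-stage31
# ln -s [path to tm-scripts]/server/boot/local/kernel/kernel kernel
# ln -s [path to tm-scripts]/server/boot/local/stage32.sqfs stage32.sqfs
# ln -s [path to tm-scripts]/server/boot/local/vmware.sqfs vmware.sqfs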


9. Client configurations

Two configuration files will be needed in the directory from the example
above, [webroot]/boot/client: config and config.tgz.

The config file will be used for client boot parameters, e.g. which NFS
share is used for storing VM images, proxy configuration, which addons are
to be used, and the like. Please take note that the client machine's root
password is defined here. These parameters are documented in
doc/boot_config_vars.

The file config.tgz holds localization information for specific environments,
e.g. site-specific authentication, home directories, shares and the like.
If there is no pre-made localization available, it is perhaps a good idea to
just touch config.tgz or pack an empty archive of that name. Example
localizations may be found at [path to tm-scripts]/server/configs.
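
Packing an empty archive of that name could, for example, be done like this
(a sketch; the temporary directory name is arbitrary):
# mkdir empty
# tar -czf config.tgz -C empty .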


10. iPXE: TODO

This will most probably not be required in the future, as the relevant
features are available in recent Syslinux versions (6.00+).


11. Stage 4 (extract Linux desktop environment)

The script clone_stage4 should not be used on a machine fulfilling only the
building role; it has to be executed on a machine with the server function
(which may, of course, be the same machine used for building). To avoid
further confusion about the modes 'remote' and 'server', and possible
malfunctions, the script clone_stage4 rejects 'local' as IP parameter.

To use Stage 4, an NFS export is necessary, as the Stage 4 files will later
be accessed client-side via NFS. Please keep in mind that
"./mltk remote -n" has to be executed on the build machine before cloning
Stage 4.
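
A minimal sketch of a matching /etc/exports entry on the serving machine
(path and export options are hypothetical):

/path/to/your/nfs/share/stage4	*(ro,no_subtree_check)

Reload the export table afterwards:
# exportfs -ra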

Then make sure all builds are synced to the server machine, if that has not
happened before:
# ./mltk server [IP of build machine] -s

... or, if wanted, sync just selected parts, if you know what you're doing;
syncing stage31 alone makes the most sense so far. So, for example:
# ./mltk server [IP] stage31 -e stage31 -s
etc.

Then do the actual cloning:
# ./scripts/clone_stage4 [IP of build machine] stage32 \
	/path/to/your/nfs/share/stage4

To use Stage 4, the clients need the NFS mount information. This is handled
via a configuration variable called SLX_STAGE4_NFS (please consult
doc/boot_config_vars for a full summary).

So now would be a good time to check (or re-check) that the base config file
in the client directory you chose above (see 8. Preparing for client boot)
contains a line such as:
SLX_STAGE4_NFS=[IP of service computer]:/path/to/your/nfs/share/stage4

You should see Stage 4 working after rebooting the client. The Stage 4
entries should appear above the list of virtual machines.

As a side note, stage4 should be usable without a complete reboot if you
re-login and
# systemctl restart nfs-mount
is executed.