See the LDP BootPrompt-HOWTO
for detailed information on the boot prompt.
It is possible to boot a system and log on to the root account without knowing
the root password as long as one has access to the console keyboard. (This
assumes there are no password requests from the BIOS or from a boot loader such
as lilo
that would prevent one from booting the system.)
This is a procedure which requires no external boot disks and no change in BIOS boot settings. Here, "Linux" is the label for booting the Linux kernel in the default Debian install.
At the lilo
boot screen, as soon as boot: appears
(on some systems you must press a Shift key at this point to prevent automatic
booting, and when lilo
uses the framebuffer you have to press TAB
to see the options you type), enter:
boot: Linux init=/bin/sh
This causes the system to boot the kernel and run /bin/sh
instead
of its standard init
. Now you have gained root privileges and a
root shell. Since /
is currently mounted read-only and many disk
partitions have not been mounted yet, you must do the following to have a
reasonably functioning system.
init-2.03# mount -n -o remount,rw /
init-2.03# mount -avt nonfs,noproc,nosmbfs
init-2.03# cd /etc
init-2.03# vi passwd
init-2.03# vi shadow
(If the second data field in /etc/passwd
is "x" for
every username, your system uses shadow passwords, and you must edit
/etc/shadow
.) To disable the root password, edit the second data
field in the password file so that it is empty. Now the system can be rebooted
and you can log on as root without a password. When booting into runlevel 1,
Debian (at least after Potato) requires a password, which some older
distributions did not.
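For example, a root entry in /etc/shadow (the hash shown here is fictitious) would change from
root:$1$AbCdEfGh$0123456789abcdefghijkl:12345:0:99999:7:::
to
root::12345:0:99999:7:::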
It is a good idea to have a minimal editor in /bin/
in case
/usr/
is not accessible (see Rescue editors, Section 11.2).
Also consider installing the sash
package. When the system
becomes unbootable, execute:
boot: Linux init=/bin/sash
sash
serves as an interactive substitute for sh
even
when /bin/sh
is unusable. It's statically linked, and includes
many standard utilities as built-ins (type "help" at the prompt for a
reference list).
Boot from any emergency boot/root disk set. If
/dev/hda3
is the original root partition, the following
will let one edit the password file just as easily as the above.
# mkdir fixit
# mount /dev/hda3 fixit
# cd fixit/etc
# vi shadow
# vi passwd
The advantage of this approach over the previous method is that one does not need to
know the lilo
password (if any). But to use it one must be able
to access the BIOS setup to allow the system to boot from floppy disk or CD, if
that is not already set.
No problem, even if you didn't bother to make a boot disk during install. If
lilo
is broken, grab the boot disk from the Debian installation
set and boot your system from it. At the boot prompt, assuming the root
partition of your Linux installation is on /dev/hda12
and you want runlevel 3, enter:
boot: rescue root=/dev/hda12 3
Then you are booted into an almost fully functional system using the kernel on the floppy. (There may be minor glitches due to lack of kernel features or modules.)
See also Install a package into an unbootable system, Section 6.3.6 if you have a broken system.
If you need a custom boot floppy, follow readme.txt
on the rescue
disk.
Chasing unstable/sid is fun, but a buggy xdm
, gdm
, kdm
, or wdm
started during the boot
process can bite you badly.
First get the root shell by entering the following at the boot prompt:
boot: Linux vga=normal s
Here, Linux is the label for the kernel image you are booting;
"vga=normal" will make sure lilo
runs in normal VGA
screen, and "s" (or "S") is the parameter passed to
init
to invoke single-user mode. Enter the root password at the
prompt.
There are a few ways to disable all the X display managers:
run update-rc.d -f ?dm remove ; update-rc.d ?dm stop 99 1 2 3 4 5 6 .
insert "exit 0" at the start of all
/etc/init.d/?dm
files.
rename all /etc/rc2.d/S99?dm
files to
/etc/rc2.d/K99?dm
.
remove all /etc/rc2.d/S99?dm
files.
run :>/etc/X11/default-display-manager
Here, the number in rc2.d
must correspond to the default runlevel
specified in /etc/inittab
. Also, ?dm
means that you need to run the command multiple times, substituting it with
each of xdm
, gdm
, kdm
, and
wdm
.
Only the first one in the list is "the one true way" in Debian. The
last one is easy but only works on Debian and requires you to set the display
manager again later using dpkg-reconfigure
. Others are generic
methods to disable daemons.
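For example, the first method expanded for all four display managers might look like this (only the ones actually installed matter):
# for dm in xdm gdm kdm wdm; do
    update-rc.d -f $dm remove
    update-rc.d $dm stop 99 1 2 3 4 5 6 .
  done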
You can still start X with the startx
command from any console
shell.
The system can be booted into a particular runlevel and configuration using the
lilo
boot prompt. Details are given in the BootPrompt-HOWTO
(LDP).
If you want to boot the system into runlevel 4, use the following input at the
lilo
boot prompt.
boot: Linux 4
If you want to boot the system into normally functioning single-user mode and
you know the root password, one of the following examples at the
lilo
boot prompt will work.
boot: Linux S
boot: Linux 1
boot: Linux -s
If you want to boot the system with less memory than the system actually has (say
48MB for a system with 64MB), use this input at the lilo
boot
prompt:
boot: Linux mem=48M
Make sure not to specify more than the actual memory size here, otherwise the
kernel will crash. If one has more than 64MB of memory, e.g. 128MB, unless
one executes mem=128M at the boot prompt or includes a similar
append line in /etc/lilo.conf
, old kernels and/or a motherboard
with an old BIOS will not use memory beyond 64MB.
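A minimal sketch of such an append line in an /etc/lilo.conf image stanza (the label and kernel path are illustrative; remember to rerun lilo afterwards):
image=/vmlinuz
    label=Linux
    append="mem=128M"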
GRUB is a new boot manager from the GNU Hurd project and is much more flexible than Lilo but has slightly different handling of boot parameters.
grub> find /vmlinuz
grub> root (hd0,0)
grub> kernel /vmlinuz root=/dev/hda1
grub> initrd /initrd
grub> boot
Here, you must be aware of the Hurd device names:
the Hurd/GRUB   Linux       MS-DOS/Windows
(fd0)           /dev/fd0    A:
(hd0,0)         /dev/hda1   C: (usually)
(hd0,3)         /dev/hda4   F: (usually)
(hd1,3)         /dev/hdb4   ?
See /usr/share/doc/grub/README.Debian.gz
and
/usr/share/doc/grub-doc/html/
for details.
System administration involves much more elaborate tasks in a Unix environment than in an ordinary personal computer environment. Make sure to know the most basic means of configuration in case you need to recover from system trouble. X11-based GUI configuration tools look nice and convenient but are often unsuitable in these emergency situations.
In this context, recording shell activities is a good practice, especially as root.
Emacs: Use M-x shell to start recording into a buffer, and use C-x C-w to write the buffer to a file.
Shell: Use the screen
command with "^A H" as described
in Console switching with screen
, Section
8.6.28; or use the script
command.
$ script
Script started, file is typescript
... do whatever ...
Ctrl-D
$ col -bx <typescript >savefile
$ vi savefile
The following can be used instead of script
:
$ bash -i 2>&1 | tee typescript
If you need to record the graphic image of an X application, including an
xterm
display, use gimp
(GUI). It can capture each
window or the whole screen. Alternatives are xwd
(xbase-clients
), import
(imagemagick
),
and scrot
(scrot
).
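For example, to capture the whole screen with each of these (output file names are arbitrary):
$ import -window root screen.png   # imagemagick
$ xwd -root -out screen.xwd        # xbase-clients
$ scrot screen.png                 # scrot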
These copy and archive commands provide basics for the backup of the system and
the data. An example of simple backup script is provided as
backup
in the example
scripts
.
If you need to rearrange file structure, move content including file links by:
Standard method:
# cp -a /source/directory /dest/directory   # requires GNU cp
# (cd /source/directory && tar cf - . ) | \
  (cd /dest/directory && tar xvfp - )
If a hard link is involved, a pedantic method is needed:
# cd /path/to/old/directory
# find . -depth -print0 | afio -p -xv -0a /mount/point/of/new/directory
If remote:
# (cd /source/directory && tar cf - . ) | \
  ssh [email protected] "(cd /dest/directory && tar xvfp - )"
If there are no linked files:
# scp -pr [email protected]:/source/directory \
  [email protected]:/dest/directory
The following comparative information on copying a whole subdirectory was
presented by Manoj Srivastava on a Debian mailing list.
cp
Traditionally, cp
was not really a candidate for this task since it dereferenced symbolic links
and did not preserve hard links. Another
thing to consider was sparse files (files with holes).
GNU cp
has overcome these limitations; however, on a non-GNU
system, cp
could still have problems. Also, you can't generate
small, portable archives using cp
.
% cp -a . newdir
tar
Tar overcame some of the problems that cp
had with symbolic links.
However, although cpio
handles special files, traditional
tar
doesn't.
tar
's way of handling multiple hard links to a file places only
one copy of the link on the tape, but the name attached to that copy is the
only one you can use to retrieve the file; cpio
's way
puts one copy for every link, but you can retrieve it using any of the names.
The tar
command changed its option for .bz2
files
between Potato and Woody, so use --bzip2 in scripts instead of its
short form -I (Potato) or -j (Woody).
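For example (archive and directory names are illustrative):
$ tar --bzip2 -cf archive.tar.bz2 directory/
$ tar --bzip2 -xf archive.tar.bz2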
pax
The new, POSIX (IEEE Std 1003.2-1992, pages 380–388 (section 4.48) and
pages 936–940 (section E.4.48)), all-singing, all-dancing, Portable
Archive Interchange utility. pax
will read, write, and list the
members of an archive file, and will copy directory hierarchies.
pax
operation is independent of the specific archive format, and
supports a wide variety of different archive formats.
pax
implementations are still new and wet behind the ears.
# apt-get install pax
$ pax -rw -p e . newdir
 or
$ find . -depth | pax -rw -p e newdir
cpio
cpio
copies files into or out of a cpio
or
tar
archive. The archive can be another file on the disk, a
magnetic tape, or a pipe.
$ find . -depth -print0 | cpio --null --sparse -pvd new-dir
afio
afio
is a better way of dealing with cpio
-format
archives. It is generally faster than cpio
, provides more diverse
magnetic tape options and deals somewhat gracefully with input data corruption.
It supports multivolume archives during interactive operation.
afio
can make compressed archives that are much safer than
compressed tar
or cpio
archives. afio
is best used as an "archive engine" in a backup script.
$ find . -depth -print0 | afio -px -0a new-dir
All my backups onto tape use afio
.
Differential backup and data synchronization can be implemented with several methods:
rcs
: backup and history, text-only
rdiff-backup
: backup and history. symlink OK.
pdumpfs
: backup and history within a filesystem. symlink OK
rsync
: 1-way synchronization (see the example after this list)
unison
: 2-way synchronization
cvs
: multi-way synchronization with server backup and history,
text-only, mature. See Concurrent Versions
System (CVS), Section 12.1.
arch
: multi-way synchronization with server backup and history, no
such thing as a "working directory".
subversion
: multi-way synchronization with server backup and
history, Apache.
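As an illustration of the rsync entry above, a minimal one-way mirroring command (the host and paths are examples):
$ rsync -av --delete ~/work/ account@backup-host:/backup/work/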
Combination of one of these with the archiving method described in Copy and archive a whole subdirectory, Section 8.3 and
the automated regular job described in Schedule activity
(cron
, at
), Section 8.6.27 will make a nice
backup system.
I will explain three easy-to-use utilities.
rdiff-backup
offers nice and simple backup with differential
history for any types of files, including symlinks. To back up most of
~/
to /mnt/backup
:
$ rdiff-backup --include ~/tmp/keep --exclude ~/tmp ~/ /mnt/backup
To restore three-day-old data from this archive to ~/old
:
$ rdiff-backup -r 3D /mnt/backup ~/old
See rdiff-backup(1)
.
pdumpfs
pdumpfs
is a simple daily backup system similar to Plan9's
dumpfs
which preserves every daily snapshot. You can access the
past snapshots at any time for retrieving a certain day's file. Let's back up
your home directory with pdumpfs
and cron
!
pdumpfs
constructs the snapshot YYYY/MM/DD in the
destination directory. All source files are copied to the snapshot directory
when pdumpfs
is run for the first time. On and after the second
time, pdumpfs
copies only updated or newly created files and
stores unchanged files as hard links to the files of the previous day's
snapshot in order to save disk space.
$ pdumpfs src-dir dest-dir [dest-basename]
See pdumpfs(8)
.
Changetrack
will record changes to the text-based configuration
files in RCS archives regularly. See changetrack(1)
.
# apt-get install changetrack
# vi changetrack.conf
Run top
to see what process is acting funny. Press `P' to sort by
CPU usage, `M' to sort by memory, and `k' to kill a process. Alternatively,
BSD-style ps aux | less or System-V-style ps -efH |
less may be used. The System-V-style syntax displays parent process IDs
(PPID) which can be used for killing zombie (defunct) children.
Use kill
to kill (or send a signal to) a process by process ID,
killall
to do the same by process command name. Frequently used
signals:
1: HUP, restart daemon
15: TERM, normal kill
9: KILL, kill hard
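For example (the PID and process name are illustrative):
# kill -HUP 1234           # make the daemon with PID 1234 reload its setup
# killall -15 someprocess  # send TERM to every process named someprocess
# killall -9 someprocess   # last resort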
Insurance against system malfunction is provided by the kernel compile option "Magic SysRq key". Pressing Alt-SysRq on an i386, followed by one of the keys r 0 k e i s u b, does the magic.
Un`r'aw restores the keyboard after things like X crashes. Changing the
console loglevel to `0' reduces error messages. sa`k' (system attention key)
kills all processes on the current virtual console. t`e'rminate kills all
processes on the current terminal except init
. k`i'll kills all
processes except init
.
`S'ync, `u'mount, and re`b'oot are for getting out of really bad situations.
Detailed information is in
/usr/share/doc/kernel-doc-version/Documentation/sysrq.txt.gz
or /usr/src/kernel-version/Documentation/sysrq.txt.gz
.
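If the kernel was built with this option, recent kernels also expose the same functions through the /proc interface; a minimal illustration:
# echo 1 > /proc/sys/kernel/sysrq   # enable the magic SysRq key
# echo s > /proc/sysrq-trigger      # same effect as Alt-SysRq s (sync)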
less
is the default pager (file content browser). Hit `h' for
help. It can do much more than more
. less
can be
supercharged by executing eval $(lesspipe) or eval
$(lessfile) in the shell startup script. See more in
/usr/share/doc/less/LESSOPEN
. The -R option allows
raw character output and enables ANSI color escape sequences. See
less(1)
.
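For example, to keep the color output of GNU ls readable in the pager:
$ ls --color=always | less -R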
w3m
may be a useful alternative pager for some character coding systems (e.g., EUC).
free
and top
give good information on memory
resources. Do not worry about the size of "used" in the
"Mem:" line, but read the one under it (38792 in the example below).
$ free -k            # for 256MB machine
             total       used       free     shared    buffers     cached
Mem:        257136     230456      26680      45736     116136      75528
-/+ buffers/cache:      38792     218344
Swap:       264996          0     264996
The exact amount of physical memory can be confirmed by grep '^Memory' /var/log/dmesg, which in this case gives "Memory: 256984k/262144k available (1652k kernel code, 412k reserved, 2944k data, 152k init)".
Total         = 262144k = 256M (1k=1024, 1M=1024k)
Free to dmesg = 256984k = Total - kernel - reserved - data - init
Free to shell = 257136k = Total - kernel - reserved - data
About 5MB is not usable by the system because the kernel uses it.
# date MMDDhhmmCCYY
# hwclock --utc --systohc
# hwclock --show
This will set system and hardware time to MM/DD hh:mm, CCYY. Times are displayed in local time but hardware time uses UTC.
If the hardware (BIOS) time is set to GMT, change the setting to
UTC=yes in the /etc/default/rcS
.
Reference: Managing
Accurate Date and Time HOWTO
.
Set system clock to the correct time automatically via a remote server:
# ntpdate server
This is good to have in /etc/cron.daily/
if your system has a
permanent Internet connection.
Use the chrony
package.
For disabling the screensaver, use the following commands.
In the Linux console:
# setterm -powersave off
Start the kon2 (kanji) console with:
# kon -SaveTime 0
While running X:
# xset s off
or
# xset -dpms
or
# xscreensaver-command -prefs
Read the corresponding manpages for controlling other console features. See
also stty(1)
for changing and printing terminal line settings.
Glibc offers getent(1)
for searching entries from administrative
databases, i.e., passwd, group, hosts, services, protocols, or networks.
getent database [key ...]
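For example:
$ getent passwd root            # look up root in the passwd database
$ getent hosts www.debian.org   # look up a host through the resolver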
One can always unplug the PC speaker. ;-) For the Bash shell:
echo "set bell-style none">> ~/.inputrc
In order to quiet on-screen error messages, the first place to check is
/etc/init.d/klogd
. Set KLOGD="-c 3" in this script and run
/etc/init.d/klogd restart. An alternative method is to run
dmesg -n3.
Here error levels mean:
0: KERN_EMERG, system is unusable
1: KERN_ALERT, action must be taken immediately
2: KERN_CRIT, critical conditions
3: KERN_ERR, error conditions
4: KERN_WARNING, warning conditions
5: KERN_NOTICE, normal but significant condition
6: KERN_INFO, informational
7: KERN_DEBUG, debug-level messages
If one particular useless error message bothers you a lot, consider making a
trivial kernel patch like shutup-abit-bp6
(available in the
examples subdirectory
).
Another place to look may be /etc/syslog.conf
; check to see
whether any messages are logged to a console device.
Console screens in Unix-like systems are usually accessed using (n)curses
library routines. These give the user a terminal-independent method of
updating character screens with reasonable optimization. See
ncurses(3X)
and terminfo(5)
.
On a Debian system, there are quite a lot of predefined entries:
$ toe | less                 # all entries
$ toe /etc/terminfo/ | less  # user reconfigurable entries
Export your selection as environment variable TERM.
If the terminfo entry for xterm
doesn't work with a non-Debian
xterm
, change your terminal type from "xterm" to one of
the feature-limited versions such as "xterm-r6" when you log in to a
Debian system remotely. See /usr/share/doc/libncurses5/FAQ
for
more. "dumb" is the lowest common denominator for terminfo.
When the screen goes berserk after cat some-binary-file (you may not be able to see the command echoed as you type):
$ reset
Convert a DOS text file (end-of-line = ^M^J) to a Unix text file (end-of-line = ^J).
# apt-get install sysutils
$ dos2unix dosfile
recode
The following will convert text files between DOS, Mac, and Unix line-ending styles:
$ recode /cl../cr <dos.txt >mac.txt
$ recode /cr..    <mac.txt >unix.txt
$ recode ../cl    <unix.txt >dos.txt
Free recode
converts files between various character sets and
surfaces with:
$ recode charset1/surface1..charset2/surface2 \
    <input.txt >output.txt
Common character sets used are (see also Introduction to locales, Section 9.7.3) [37] :
us — ASCII (7 bits)
l1 — ISO Latin-1 (ISO-8859-1, Western Europe, 8 bits)
EUCJP — EUC-JP for Japanese (Unix)
SJIS — Shift-JIS for Japanese (Microsoft)
ISO2022JP — Mail encoding for Japanese (7 bits)
u2 — UCS-2 (Universal Character Set, 2 bytes)
u8 — UTF-8 (Universal Transformation Format, 8 bits)
Common surfaces used are [38] :
/cr — Carriage return as end of line (Mac text)
/cl — Carriage return line feed as end of line (DOS text)
/ — Line feed as end of line (Unix text)
/d1 — Human readable bytewise decimal dump
/x1 — Human readable bytewise hexadecimal dump
/64 — Base64 encoded text
/QP — Quoted-Printable encoded text
For more, see the pertinent descriptions in info recode.
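For example, using the charset aliases listed above (file names are illustrative):
$ recode SJIS..EUCJP <sjis.txt >eucjp.txt   # Shift-JIS to EUC-JP
$ recode l1..u8 <latin1.txt >utf8.txt       # Latin-1 to UTF-8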
There are also more specialized conversion tools:
character set conversion:
iconv
— locale encoding conversions (see the example after this list)
konwert
— fancy encoding conversions
binary file conversion:
uuencode
and uudecode
— for Unix.
mimencode
— for the mail.
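For example, the iconv entry above converts between encodings like this (file names are illustrative):
$ iconv -f ISO-8859-1 -t UTF-8 <input.txt >output.txt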
Replace all instances of FROM_REGEX with TO_TEXT in all of the files FILES ...:
$ perl -i -p -e 's/FROM_REGEX/TO_TEXT/g;' FILES ...
-i is for "in-place editing", -p is for "implicit loop over FILES ...". If the substitution is complex, you can make recovery from errors easier by using the parameter -i.bak instead of -i; this will keep each original file, adding .bak as a file extension.
The following script will remove lines 5–10 and lines 16–20 in place.
#!/bin/bash
ed $1 <<EOF
16,20d
5,10d
w
q
EOF
Here, ed
commands are the same as vi
command-mode commands. Deleting from the end of the file first keeps the
earlier line numbers valid, which makes this easy to script.
Following either of these procedures will extract the differences between two source files and create a unified diff file, file.patch0 or file.patch1, depending on the file location:
$ diff -u file.old file.new > file.patch0
$ diff -u old/file new/file > file.patch1
The diff file (alternatively called patch file) is used to send a program update. The receiving party will apply this update to another file by:
$ patch -p0 file < file.patch0
$ patch -p1 file < file.patch1
If you have three versions of source code, you can merge them more effectively
using diff3
:
$ diff3 -m file.mine file.old file.yours > file
$ split -b 650m file   # split file into 650MB chunks
$ cat x* >largefile    # merge files into 1 large file
Let's consider a text file called DPL
in which the names of all previous Debian
project leaders and the dates they took office are listed in a
space-separated format.
Ian Murdock August 1993
Bruce Perens April 1996
Ian Jackson January 1998
Wichert Akkerman January 1999
Ben Collins April 2001
Bdale Garbee April 2002
Martin Michlmayr March 2003
Awk is frequently used to extract data from these types of files.
$ awk '{ print $3 }' <DPL                   # month started
August
April
January
January
April
April
March
$ awk '($1=="Ian") { print }' <DPL          # DPL called Ian
Ian Murdock August 1993
Ian Jackson January 1998
$ awk '($2=="Perens") { print $3,$4 }' <DPL # When Perens started
April 1996
Shells such as Bash can also be used to parse this kind of file:
$ while read first last month year; do
    echo $month
  done <DPL
... same output as the first Awk example
Here, the read
built-in command uses the characters in $IFS (internal
field separators) to split lines into words.
If you change IFS to ":", you can parse /etc/passwd
with
shell nicely:
$ oldIFS="$IFS"   # save old value
$ IFS=":"
$ while read user password uid gid rest_of_line; do
    if [ "$user" = "osamu" ]; then
      echo "$user's ID is $uid"
    fi
  done < /etc/passwd
osamu's ID is 1001
$ IFS="$oldIFS"   # restore old value
(If Awk is used to do the equivalent, use FS=":" to set the field separator.)
IFS is also used by the shell to split results of parameter expansion, command substitution, and arithmetic expansion. These do not occur within double or single quoted words. The default value of IFS is <space>, <tab>, and <newline> combined.
Be careful about using these shell IFS tricks. Strange things may happen when the shell interprets parts of the script as its input.
$ IFS=":,"                  # use ":" and "," as IFS
$ echo IFS=$IFS, IFS="$IFS" # echo is a Bash built-in
IFS= , IFS=:,
$ date -R                   # just a command output
Sat, 23 Aug 2003 08:30:15 +0200
$ echo $(date -R)           # sub shell --> input to main shell
Sat 23 Aug 2003 08 30 36 +0200
$ unset IFS                 # reset IFS to the default
$ echo $(date -R)
Sat, 23 Aug 2003 08:30:50 +0200
The following script fragments do useful things as part of a pipe.
find /usr | egrep -v "/usr/var|/usr/tmp|/usr/local"
                   # find all files in /usr excluding some files
xargs -n 1 command # run command for all items from stdin
xargs -n 1 echo |  # split white-space-separated items into lines
xargs echo |       # merge all lines into a line
grep -e pattern|   # extract lines containing pattern
cut -d: -f3 -|     # extract third field separated by : (passwd file etc.)
awk '{ print $3 }' |        # extract third field separated by whitespaces
awk -F'\t' '{ print $3 }' | # extract third field separated by tab
col -bx |          # remove backspace and expand tabs to spaces
expand -|          # expand tabs
sort -u|           # sort and remove duplicates
tr '\n' ' '|       # concatenate lines into one line
tr -d '\r'|        # remove CR
tr 'A-Z' 'a-z'|    # convert uppercase to lowercase
sed 's/^/# /'|     # make each line a comment
sed 's/\.ext//g'|  # remove .ext
sed -n -e 2p|      # print the second line
head -n 2 -|       # print the first 2 lines
tail -n 2 -|       # print the last 2 lines
The following ways of looping over each file matching *.ext ensure proper handling of odd file names (such as ones containing spaces) and perform equivalent processing:
Shell loop (this example is in multi-line style with PS2=" "; to do the same on one line, insert a semicolon for each line break):
for x in *.ext; do
    if test -f "$x"; then
        command "$x"
    fi
done
find
and xargs
combination:
find . -type f -maxdepth 1 -name '*.ext' -print0 | \
    xargs -0 -n 1 command
find
with -exec option with a command:
find . -type f -maxdepth 1 -name '*.ext' \
    -exec command '{}' \;
find
with -exec option with a short shell script:
find . -type f -maxdepth 1 -name '*.ext' \
    -exec sh -c "command '{}' && echo 'successful'" \;
Although any Awk scripts can be automatically rewritten in Perl using
a2p(1)
, one-liner Awk scripts are best converted to one-liner perl
scripts manually. For example
awk '($2=="1957") { print $3 }' |
is equivalent to any one of the following lines:
perl -ne '@f=split; if ($f[1] eq "1957") { print "$f[2]\n"}' |
perl -ne 'if ((@f=split)[1] eq "1957") { print "$f[2]\n"}' |
perl -ne '@f=split; print $f[2] if ( $f[1]==1957 )' |
perl -lane 'print $F[2] if $F[1] eq "1957"' |
Since all the whitespace in the arguments to perl
in the line
above can be removed, and by taking advantage of the automatic conversions between
numbers and strings in Perl, this can be shortened to:
perl -lane 'print$F[2]if$F[1]eq+1957' |
See perlrun(1)
for the command-line options. For more crazy Perl
scripts, http://perlgolf.sourceforge.net
may be interesting.
The following will read a web page into a text file. Very useful when copying configurations off the Web.
$ lynx -dump http://www.remote-site.com/help-info.html >textfile
links
and w3m
can be used here, too, with slight
differences in rendering.
If this is a mailing list archive, use munpack
to extract the MIME
contents from the text.
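For example (the file name is illustrative):
$ munpack saved-mail-message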
The following will print a web page into a PostScript file/printer.
$ apt-get install html2ps
$ html2ps URL | lpr
See lpr
/lpd
,
Section 3.6.1. Also check a2ps
and mpage
packages for creating PostScript files.
The following will print a manual page into a PostScript file/printer.
$ man -Tps some-manpage | lpr
$ man -Tps some-manpage | mpage -2 | lpr
You can merge two PostScript or PDF files.
$ gs -q -dNOPAUSE -dBATCH -sDEVICE=pswrite \
    -sOutputFile=bla.ps -f foo1.ps foo2.ps
$ gs -q -dNOPAUSE -dBATCH -sDEVICE=pdfwrite \
    -sOutputFile=bla.pdf -f foo1.pdf foo2.pdf
Display time used by a process.
# time some-command >/dev/null
real    0m0.035s   # time on wall clock (elapsed real time)
user    0m0.000s   # time in user mode
sys     0m0.020s   # time in kernel mode
nice
command
Use nice
(from the GNU shellutils
package) to set a
command's nice value when starting. renice
(bsdutils
) and top
can renice a process. A nice
value of 19 represents the slowest (lowest priority) process; negative values
are "not-nice", with -20 being a very fast (high priority) process.
Only the superuser can set negative nice values.
# nice -19 top                                        # very nice
# nice --20 wodim -v -eject speed=2 dev=0,0 disk.img  # very fast
Sometimes an extreme nice value does more harm than good to the system. Use this command carefully.
Schedule activity (cron, at)
Use cron
and at
to schedule tasks under Linux. See
at(1)
, crontab(5)
, crontab(8)
.
Run the command crontab -e to create or edit a crontab file to set up regularly scheduled events. Example of a crontab file:
# use /bin/sh to run commands, no matter what /etc/passwd says
SHELL=/bin/sh
# mail any output to `paul', no matter whose crontab this is
MAILTO=paul
# Min Hour DayOfMonth Month DayOfWeek command (Day... are OR'ed)
# run at 00:05, every day
5  0  *  *  *   $HOME/bin/daily.job >> $HOME/tmp/out 2>&1
# run at 14:15 on the first of every month -- output mailed to paul
15 14  1  *  *  $HOME/bin/monthly
# run at 22:00 on weekdays(1-5), annoy Joe. % for newline, last % for cc:
0  22  *  * 1-5 mail -s "It's 10pm" joe%Joe,%%Where are your kids?%.%%
23 */2 1  2  *  echo "run 23 minutes after 0am, 2am, 4am ..., on Feb 1"
5  4   *  * sun echo "run at 04:05 every sunday"
# run at 03:40 on the first Monday of each month
40 3 1-7 * *    [ "$(date +%a)" == "Mon" ] && command -args
Run the at
command to schedule a one-time job:
$ echo 'command -args'| at 3:40 monday
screen
The screen
program allows you to run multiple virtual terminals,
each with its own interactive shell, on a single physical terminal or terminal
emulation window. Even if you use Linux virtual consoles or multiple
xterm
windows, it is worth exploring screen
for its
rich feature set, which includes
scrollback history,
copy-and-paste,
output logging,
digraph entry, and
the ability to detach an entire screen
session
from your terminal and reattach it later.
If you frequently log on to a Linux machine from a remote terminal or using a
VT100 terminal program, screen
will make your life much easier
with the detach feature.
You are logged in via a dialup connection, and are running a complex
screen
session with editors and other programs open in several
windows.
Suddenly you need to leave your terminal, but you don't want to lose your work by hanging up.
Simply type ^A d to detach the session, then log
out. (Or, even quicker, type ^A DD to have screen
detach and log you out itself.)
When you log on again later, enter the command screen -r, and
screen
will magically reattach all the windows
you had open.
screen
commands
Once you start screen
, all keyboard input is sent to your current
window except for the command keystroke, by default ^A. All
screen
commands are entered by typing ^A plus a
single key [plus any parameters]. Useful commands:
^A ?    show a help screen (display key bindings)
^A c    create a new window and switch to it
^A n    go to next window
^A p    go to previous window
^A 0    go to window number 0
^A w    show a list of windows
^A a    send a Ctrl-A to current window as keyboard input
^A h    write a hardcopy of current window to file
^A H    begin/end logging current window to file
^A ^X   lock the terminal (password protected)
^A d    detach screen session from the terminal
^A DD   detach screen session and log out
This is only a small subset of screen
's commands and features. If
there's something you want screen
to be able to do, chances are it
can! See screen(1)
for details.
screen
session
If you find that backspace and/or Ctrl-H do not work properly when you are
running screen
, edit /etc/screenrc
, find the line
reading
bindkey -k kb stuff "\177"
and comment it out (i.e., add "#" as the first character).
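For example, the commented-out line reads:
# bindkey -k kb stuff "\177"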
screen
for X
Check out xmove
. See xmove(1)
.
Install netkit-ping
, traceroute
,
dnsutils
, ipchains
(for 2.2 kernel),
iptables
(for 2.4 kernel), and net-tools
packages
and:
$ ping yahoo.com        # check Internet connection
$ traceroute yahoo.com  # trace IP packets
$ ifconfig              # check host config
$ route -n              # check routing config
$ dig [@dns-server.com] host.dom [{a|mx|any}] |less
                        # check host.dom DNS records by dns-server.com
                        # for a {a|mx|any} record
$ ipchains -L -n |less  # check packet filter (2.2 kernel)
$ iptables -L -n |less  # check packet filter (2.4 kernel)
$ netstat -a            # find all open ports
$ netstat -l --inet     # find listening ports
$ netstat -ln --tcp     # find listening TCP ports (numeric)
To flush mail from the local spool:
# exim4 -q      # flush waiting mail
# exim4 -qf     # flush all mail
# exim4 -qff    # flush even frozen mail
-qff may be better as an option in the
/etc/ppp/ip-up.d/exim
script. For Woody and older distributions,
replace exim4
with exim
.
To remove frozen mail from the local spool with a delivery error message:
# exim4 -Mg `mailq | grep frozen | awk '{ print $3 }'`
For Woody and older distributions, replace exim4
with
exim
.
mbox
contents
If your home directory became full and procmail
failed, you need to deliver the mail waiting in
/var/mail/username
to the sorted mailboxes in your home directory manually. After making disk
space in the home directory, run:
# /etc/init.d/exim4 stop
# formail -s procmail </var/mail/username
# /etc/init.d/exim4 start
For Woody and older distributions, replace exim4
with
exim
.
In order to clear the contents of a file such as a logfile, do not use rm to delete the file and then create a new empty file, because the file may still be accessed in the interval between commands. The following is the safe way to clear the contents of the file.
$ :>file-to-be-cleared
The following commands will create dummy or empty files:
$ dd if=/dev/zero of=filename bs=1k count=5     # 5KB of zero content
$ dd if=/dev/urandom of=filename bs=1M count=7  # 7MB of random content
$ touch filename        # create 0B file (if file exists, updates mtime)
For example, the following commands executed from the shell of the Debian boot
floppy will erase all the content of the hard disk /dev/hda
completely for most practical uses.
# dd if=/dev/urandom of=/dev/hda; dd if=/dev/zero of=/dev/hda
chroot
The chroot
program, chroot(8)
, enables us to run
different instances of the GNU/Linux environment on a single system
simultaneously without rebooting.
One may also run a resource-hungry program such as apt-get
or
dselect
for a slow satellite machine on a fast host machine: NFS-mount the
satellite machine's root filesystem read-write on the host and chroot into the
mount point, as sketched below.
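A minimal sketch of that setup (the host name, export, and mount point are examples; the satellite must export its root filesystem read-write to the host):
main # mkdir -p /mnt/satellite
main # mount -t nfs satellite:/ /mnt/satellite
main # chroot /mnt/satellite /bin/bash
chroot # apt-get update && apt-get upgrade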
Run a different Debian distribution with chroot
A chroot Debian environment can easily be created by the
debootstrap
command in Sarge. For post-Sarge distributions, you
may use cdebootstrap
command instead with appropriate option. For
example, to create a Sid chroot on /sid-root while having fast
Internet access:
main # cd /; mkdir /sid-root
main # debootstrap sid /sid-root http://ftp.debian.org/debian/
... watch it download the whole system
main # echo "proc /sid-root/proc proc none 0 0" >> /etc/fstab
main # mount /sid-root/proc
main # mount /dev/ /sid-root/dev -o bind
main # cp /etc/hosts /sid-root/etc/hosts
main # chroot /sid-root /bin/bash
chroot # cd /dev; /sbin/MAKEDEV generic; cd -
chroot # apt-setup   # set-up /etc/apt/sources.list
chroot # vi /etc/apt/sources.list   # point the source to unstable
chroot # dselect     # you may use aptitude, install mc and vim :-)
At this point you should have a fully working Debian system, where you can play around without fear of affecting your main Debian installation.
This debootstrap
trick can also be used to install Debian to a
system without using a Debian install disk, but instead one for another
GNU/Linux distribution. See http://www.debian.org/releases/stable/i386/apcs04
.
Setting up login for chroot
Typing chroot /sid-root /bin/bash is easy, but it retains all sorts of environment variables that you may not want, and has other issues. A much better approach is to run another login process on a separate virtual terminal where you can log in to the chroot directly.
Since on default Debian systems tty1 to tty6 run
Linux consoles and tty7 runs the X Window System, let's set up
tty8 for a chrooted console as an example. After creating a
chroot system as described in Run a different Debian
distribution with chroot
, Section 8.6.35.1, type from the root
shell of the main system:
main # echo "8:23:respawn:/usr/sbin/chroot /sid-root "\
"/sbin/getty 38400 tty8" >> /etc/inittab
main # init q   # reload init
Setting up X for chroot
You want to run the latest X and GNOME safely in your chroot? That's entirely possible! The following example will make GDM run on virtual terminal vt9.
First install a chroot system using the method described in Run a different Debian distribution with
chroot
, Section 8.6.35.1. From the root of the main system,
copy key configuration files to the chroot system.
main # cp /etc/X11/XF86Config-4 /sid-root/etc/X11/XF86Config-4
main # chroot /sid-root   # or use chroot console
chroot # cd /dev; /sbin/MAKEDEV generic; cd -
chroot # apt-get install gdm gnome x-window-system
chroot # vi /etc/gdm/gdm.conf   # do s/vt7/vt9/ in [servers] section
chroot # /etc/init.d/gdm start
Here, /etc/gdm/gdm.conf
was edited to change the first virtual
console from vt7 to vt9.
Now you can easily switch back and forth between full X environments in your chroot and your main system just by switching between Linux virtual terminals; e.g. by using Ctrl-Alt-F7 and Ctrl-Alt-F9. Have fun!
[FIXME] Add a comment and link to the init script of the chrooted
gdm
.
chroot
A chroot environment for another Linux distribution can easily be created. You
install a system into separate partitions using the installer of the other
distribution. If its root partition is in /dev/hda9
:
main # cd /; mkdir /other-dist
main # mount -t ext3 /dev/hda9 /other-dist
main # chroot /other-dist /bin/bash
Then proceed as in Run a different Debian
distribution with chroot
, Section 8.6.35.1, Setting up login for chroot
, Section
8.6.35.2, and Setting up X for chroot
,
Section 8.6.35.3.
chroot
There is a more specialized chroot package, pbuilder
, which
constructs a chroot system and builds a package inside the chroot. It is an
ideal system to use to check that a package's build-dependencies are correct,
and to be sure that unnecessary and wrong build dependencies will not exist in
the resulting package.
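The basic workflow looks like this (the .dsc file name is an example):
# apt-get install pbuilder
# pbuilder create                    # build the base chroot image
# pbuilder build package_1.0-1.dsc   # build the package inside the chroot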
You can check whether two files are the same file with two hard links by:
$ ls -li file1 file2
mount
hard disk image file
If file.img
contains an image of the contents of an entire hard disk and the
partition you want starts at byte offset xxxx =
(start sector) * (bytes/sector), then the following will mount it to
/mnt
:
# mount -o loop,offset=xxxx file.img /mnt
Note that most hard disks have 512 bytes/sector.
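For example, if the partition in the image starts at sector 63, the offset is 63 * 512 = 32256:
# mount -o loop,offset=32256 file.img /mnt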
Basics of getting files from Windows:
# mount -t smbfs -o username=myname,uid=my_uid,gid=my_gid \
  //server/share /mnt/smb          # mount Windows files to Linux
# smbmount //server/share /mnt/smb \
  -o "username=myname,uid=my_uid,gid=my_gid"
# smbclient -L 192.168.1.2         # list the shares on a computer
Samba neighbors can be checked from Linux using:
# smbclient -N -L ip_address_of_your_PC | less
# nmblookup -T "*"
Many foreign filesystems have Linux kernel support, and can thus be accessed simply by mounting the devices containing the filesystems. For certain filesystems, there are also a few specialized tools to access the filesystems without mounting the devices. This is accomplished with user-space programs so that kernel filesystem support is not needed.
mtools
: for MS-DOS filesystem (MS-DOS, Windows)
cpmtools
: for CP/M filesystem
hfsutils
: for HFS filesystem (native Macintosh)
hfsplus
: for HFS+ filesystem (modern Macintosh)
In order to create and check an MS-DOS FAT filesystem, dosfstools
is useful.
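For example (assuming the default drive definitions in /etc/mtools.conf):
$ mdir a:               # list a DOS floppy without mounting it (mtools)
$ mcopy a:file.txt .    # copy a file from the floppy (mtools)
# mkfs.msdos /dev/fd0   # create an MS-DOS FAT filesystem (dosfstools)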
Here are a few examples of dangerous actions. The negative impact is worse if you are using the privileged account, root.
The use of wildcard file names in command line arguments such as "rm -rf .*" may cause dangerous results, since ".*" expands to include "." and "..". Fortunately, the current version of the "rm" command in the Debian distribution checks the sanity of the argument file names and refuses to remove "." and "..". But this is not always the case. Try the following to see how wildcard file names work.
"echo *": lists every non-dot files and non-dot directories under current directory.
"echo .[^.]*": lists every dot file and dot-directories under current directory.
"echo .*": lists everything under parent directory and parent directory itself.
Loss of some important files such as /etc/passwd
through your
stupidity is tough. The Debian system makes regular backups of them in
/var/backups/
. When you restore these files, you may have to set the proper
permissions manually.
# cp /var/backups/passwd /etc/passwd
# chmod 644 /etc/passwd
See also Recover package selection data, Section 6.3.4.