This is a quickstart that shows a full installation of the Toolkit on two Debian 3.1 machines. It shows the installation of prereqs, installation of the toolkit, creation of certificates, and configuration of services. It is designed to supplement the main admin guide.
The installer used throughout this document is the GT4.0.1 installer. There are no changes required to use this document with later 4.0.x installers. You should use the most current version available.
I will be installing all of the toolkit from source, so I'm going to double-check my system for pre-requisites. The full list of prereqs is available at Software Prerequisites in the Admin Guide.
First I'll check for zlib development libraries for GSI-OpenSSH:
choate
% dpkg --list | grep zlib
ii  zlib-bin    1.2.2-4.sarge.  compression library - sample programs
ii  zlib1g      1.2.2-4.sarge.  compression library - runtime
ii  zlib1g-dev  1.2.2-4.sarge.  compression library - development
I have zlib1g-dev installed, so I will be okay for building GSI-OpenSSH.
Note: The package names may vary on non-Debian systems. On an RPM-based system, look for the equivalent zlib development package.
Next, I'll install java from Sun. It's called the "J2SE SDK" on their website.
root@choate:/usr/java#
./j2sdk-1_4_2_10-linux-i586.bin
Sun Microsystems, Inc. Binary Code License Agreement
for the JAVATM 2 SOFTWARE DEVELOPMENT KIT (J2SDK), STANDARD EDITION, ...
Creating j2sdk1.4.2_10/jre/lib/plugin.jar
Creating j2sdk1.4.2_10/jre/javaws/javaws.jar
Done.
Next, we install ant:
root@choate:/usr/local#
tar xzf apache-ant-1.6.5-bin.tar.gz
root@choate:/usr/local#
ls apache-ant-1.6.5
bin INSTALL LICENSE LICENSE.xerces TODO docs KEYS LICENSE.dom NOTICE welcome.html etc lib LICENSE.sax README WHATSNEW
Note: This was fine on my Debian box, because it doesn't come with ant pre-installed. Most RedHat and Fedora Core boxes already ship with ant, but it is configured to use gcj. We don't want to use gcj! To fix this, look for an /etc/ant.conf file. If you have one, rename it to /etc/ant.conf.orig for the duration of this quickstart.
My system already has C/C++ compilers:
choate
% gcc --version
gcc (GCC) 3.3.5 (Debian 1:3.3.5-13)
Copyright (C) 2003 Free Software Foundation, Inc.
This is free software; see the source for copying conditions.  There is NO
warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
choate
% g++ --version
g++ (GCC) 3.3.5 (Debian 1:3.3.5-13)
Copyright (C) 2003 Free Software Foundation, Inc.
This is free software; see the source for copying conditions.  There is NO
warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
GNU versions of tar/make/sed:
choate
% tar --version
tar (GNU tar) 1.14
Copyright (C) 2004 Free Software Foundation, Inc.
This program comes with NO WARRANTY, to the extent permitted by law.
You may redistribute it under the terms of the GNU General Public License;
see the file named COPYING for details.
Written by John Gilmore and Jay Fenlason.
choate
% sed --version
GNU sed version 4.1.2
Copyright (C) 2003 Free Software Foundation, Inc.
This is free software; see the source for copying conditions.  There is NO
warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE,
to the extent permitted by law.
choate
% make --version
GNU Make 3.80
Copyright (C) 2002 Free Software Foundation, Inc.
This is free software; see the source for copying conditions.  There is NO
warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
And perl, of course:
choate %
perl --version
This is perl, v5.8.4 built for i386-linux-thread-multi
Copyright 1987-2004, Larry Wall
Perl may be copied only under the terms of either the Artistic License or the
GNU General Public License, which may be found in the Perl 5 source kit.
Complete documentation for Perl, including FAQ lists, should be found on this
system using `man perl' or `perldoc perl'. If you have access to the Internet,
point your browser at http://www.perl.com/, the Perl Home Page.
I have sudo for GRAM:
choate
% sudo -V
Sudo version 1.6.8p7
Let's check for postgres:
choate
% dpkg --list | grep postgres
ii  postgresql-cli  7.4.7-6sarge1  front-end programs for PostgreSQL
choate
% dpkg --list | grep psql
choate
%
postgresql-cli is just the front-end programs, not the postgresql server. In Debian, the server package is just known as "postgresql". I'll install it:
root@choate:/usr/local#
apt-get install postgresql
Reading Package Lists... Done
Building Dependency Tree... Done
Suggested packages:
  libpg-perl libpgjava libpgtcl postgresql-doc postgresql-dev
  postgresql-contrib pidentd ident-server pgdocs pgaccess
The following NEW packages will be installed:
  postgresql
...
Success. The database server should be started automatically. If not, you can
start the database server using:
  /etc/init.d/postgresql start
I will have to edit the configuration files later for RFT, but having it installed is enough for now.
For the sake of completeness, I will also install IODBC, which is an optional prereq for RLS:
root@choate:/root#
apt-get install libiodbc2 libiodbc2-dev
Reading Package Lists... Done
Building Dependency Tree... Done
The following NEW packages will be installed:
  libiodbc2 libiodbc2-dev
...
Setting up libiodbc2 (3.52.2-3) ...
Setting up libiodbc2-dev (3.52.2-3) ...
That completes the list of build prereqs, so now I will download the installer and build it. The long version of these instructions is at Installing in the Admin Guide.
root@cognito:~#
adduser globus
Adding user `globus'...
Adding new group `globus' (1023).
Adding new user `globus' (1023) with group `globus'.
Creating home directory `/home/globus'.
Copying files from `/etc/skel'
Enter new UNIX password:********
Retype new UNIX password:********
passwd: password updated successfully
Changing the user information for globus
Enter the new value, or press ENTER for the default
        Full Name []: Globus
        Room Number []:
        Work Phone []:
        Home Phone []:
        Other []:
Is the information correct? [y/N]y
root@choate:/etc/init.d#
mkdir /usr/local/globus-4.0.1/
root@choate:/etc/init.d#
chown globus:globus /usr/local/globus-4.0.1/
Now, as the newly created globus user:
globus@choate:~$
tar xzf gt4.0.1-all-source-installer.tar.gz
globus@choate:~$
cd gt4.0.1-all-source-installer
globus@choate:~/gt4.0.1-all-source-installer$
./configure --prefix=/usr/local/globus-4.0.1/ \
--with-iodbc=/usr/lib
checking build system type... i686-pc-linux-gnu
checking for javac... no
configure: WARNING: A Java compiler is needed for some parts of the toolkit
configure: WARNING: This message can be ignored if you are only building the C parts of the toolkit
checking for ant... no
configure: WARNING: ant is needed for some parts of the toolkit
configure: WARNING: If you know you will not need one
configure: creating ./config.status
config.status: creating Makefile
Let's set up my Java environment and try again:
globus@choate:~/gt4.0.1-all-source-installer$
export ANT_HOME=/usr/local/apache-ant-1.6.5
globus@choate:~/gt4.0.1-all-source-installer$
export JAVA_HOME=/usr/java/j2sdk1.4.2_10/
globus@choate:~/gt4.0.1-all-source-installer$
export PATH=$ANT_HOME/bin:$JAVA_HOME/bin:$PATH
globus@choate:~/gt4.0.1-all-source-installer$
./configure --prefix=/usr/local/globus-4.0.1/ \
--with-iodbc=/usr/lib
checking build system type... i686-pc-linux-gnu
checking for javac... /usr/java/j2sdk1.4.2_10//bin/javac
checking for ant... /usr/local/apache-ant-1.6.5/bin/ant
configure: creating ./config.status
config.status: creating Makefile
Much better!
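Rather than re-exporting these variables in every new shell, you can persist them in your shell startup file so configure finds javac and ant automatically. A minimal sketch for bash, assuming the same install paths used in this quickstart (adjust if your JDK or Ant live elsewhere):

```shell
# Append the toolkit build environment to the bash startup file.
# The paths below are this quickstart's examples, not universal defaults.
cat >> ~/.bashrc <<'EOF'
export ANT_HOME=/usr/local/apache-ant-1.6.5
export JAVA_HOME=/usr/java/j2sdk1.4.2_10/
export PATH=$ANT_HOME/bin:$JAVA_HOME/bin:$PATH
EOF
```

tcsh users would put equivalent setenv lines in ~/.cshrc instead, as the usercert section later shows.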
Note: The machine I am installing on doesn't have access to a scheduler. If it did, I would have specified one of the wsgram scheduler options at configure time.
Note: I really could have used the binary installer for this example, since Debian ia32 binaries are available. To make the quickstart more general, I decided to use source instead.
Now it's time to build the toolkit:
globus@choate:~/gt4.0.1-all-source-installer$
make | tee installer.log
cd gpt-3.2autotools2004 && OBJECT_MODE=32 ./build_gpt
build_gpt ====> installing GPT into /usr/local/globus-4.0.1/
...
Time for a coffee break here; the build will take over an hour, possibly
longer depending on how fast your machine is
...
echo "Your build completed successfully. Please run make install."
Your build completed successfully. Please run make install.
globus@choate:~/gt4.0.1-all-source-installer$
make install
/usr/local/globus-4.0.1//sbin/gpt-postinstall
...
..Done
Now that the toolkit is installed, we're going to want hostcerts for the machine, and a usercert for me. To do that, we're going to use the SimpleCA that is distributed with the toolkit. Here's how we set it up, based on the instructions at SimpleCA Admin:
globus@choate:~$
export GLOBUS_LOCATION=/usr/local/globus-4.0.1
globus@choate:~$
source $GLOBUS_LOCATION/etc/globus-user-env.sh
globus@choate:~$
$GLOBUS_LOCATION/setup/globus/setup-simple-ca
WARNING: GPT_LOCATION not set, assuming:
    GPT_LOCATION=/usr/local/globus-4.0.1

 C e r t i f i c a t e    A u t h o r i t y    S e t u p

This script will setup a Certificate Authority for signing Globus
users certificates.  It will also generate a simple CA package
that can be distributed to the users of the CA.

The CA information about the certificates it distributes will
be kept in:

    /home/globus/.globus/simpleCA/
/usr/local/globus-4.0.1/setup/globus/setup-simple-ca: line 250: test: res: integer expression expected

The unique subject name for this CA is:

    cn=Globus Simple CA, ou=simpleCA-choate.mcs.anl.gov, ou=GlobusTest, o=Grid

Do you want to keep this as the CA subject (y/n) [y]:y
Enter the email of the CA (this is the email where certificate requests will be sent to be signed by the CA):bacon@choate
The CA certificate has an expiration date. Keep in mind that once the
CA certificate has expired, all the certificates signed by that CA
become invalid.  A CA should regenerate the CA certificate and start
re-issuing ca-setup packages before the actual CA certificate expires.
This can be done by re-running this setup script.

Enter the number of DAYS the CA certificate should last before it
expires.  [default: 5 years (1825 days)]:RETURN
Enter PEM pass phrase:******
Verifying - Enter PEM pass phrase:******
/bin/sed: can't read /tmp//globus_tmp_ca_setup//pkgdata/pkg_data_src.gpt.tmpl: No such file or directory
creating CA config package...

A self-signed certificate has been generated for the Certificate
Authority with the subject:

    /O=Grid/OU=GlobusTest/OU=simpleCA-choate.mcs.anl.gov/CN=Globus Simple CA

If this is invalid, rerun this script

    /usr/local/globus-4.0.1/setup/globus/setup-simple-ca

and enter the appropriate fields.

-------------------------------------------------------------------

The private key of the CA is stored in /home/globus/.globus/simpleCA//private/cakey.pem

The public CA certificate is stored in /home/globus/.globus/simpleCA//cacert.pem

The distribution package built for this CA is stored in

    /home/globus/.globus/simpleCA//globus_simple_ca_ebb88ce5_setup-0.18.tar.gz

This file must be distributed to any host wishing to request
certificates from this CA.

CA setup complete.

The following commands will now be run to setup the security
configuration files for this CA:

    $GLOBUS_LOCATION/sbin/gpt-build \
        /home/globus/.globus/simpleCA//globus_simple_ca_ebb88ce5_setup-0.18.tar.gz
    $GLOBUS_LOCATION/sbin/gpt-postinstall

-------------------------------------------------------------------

setup-ssl-utils: Configuring ssl-utils package
Running setup-ssl-utils-sh-scripts...

***************************************************************************

Note: To complete setup of the GSI software you need to run the
following script as root to configure your security configuration
directory:

    /usr/local/globus-4.0.1/setup/globus_simple_ca_ebb88ce5_setup/setup-gsi

For further information on using the setup-gsi script, use the -help
option.  The -default option sets this security configuration to be
the default, and -nonroot can be used on systems where root access is
not available.

***************************************************************************

setup-ssl-utils: Complete
That's quite a lot of output. Here's what has happened:
globus@choate:~$
ls ~/.globus/
simpleCA
globus@choate:~$
ls ~/.globus/simpleCA/
cacert.pem globus_simple_ca_ebb88ce5_setup-0.18.tar.gz newcerts certs grid-ca-ssl.conf private crl index.txt serial
That's the directory where my simpleCA has been created. Now I need to make my machine trust that new CA, which I do by running the following command as root:
root@choate:~#
export GLOBUS_LOCATION=/usr/local/globus-4.0.1
root@choate:~#
$GLOBUS_LOCATION/setup/globus_simple_ca_ebb88ce5_setup/setup-gsi -default
setup-gsi: Configuring GSI security
Making /etc/grid-security...
mkdir /etc/grid-security
Making trusted certs directory: /etc/grid-security/certificates/
mkdir /etc/grid-security/certificates/
Installing /etc/grid-security/certificates//grid-security.conf.ebb88ce5...
Running grid-security-config...
Installing Globus CA certificate into trusted CA certificate directory...
Installing Globus CA signing policy into trusted CA certificate directory...
setup-gsi: Complete
root@choate:~#
ls /etc/grid-security/
certificates  globus-host-ssl.conf  globus-user-ssl.conf  grid-security.conf
root@choate:~#
ls /etc/grid-security/certificates/
ebb88ce5.0                     globus-user-ssl.conf.ebb88ce5
ebb88ce5.signing_policy        grid-security.conf.ebb88ce5
globus-host-ssl.conf.ebb88ce5
Those are the configuration files that establish trust for the simpleCA for my Globus Toolkit installation. Notice that the hash value ebb88ce5 matches the hash value of my SimpleCA. These files are all explained in Security Admin.
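The eight-character hash in those file names is the OpenSSL subject-name hash of the CA certificate, so you can recompute it yourself to confirm the files belong to your CA. A sketch using a throwaway self-signed certificate as a stand-in for the real cacert.pem (note: OpenSSL 1.0 and later compute this hash differently than the GT4-era OpenSSL did, so a modern box will print a different value for the same certificate):

```shell
# Create a throwaway self-signed CA certificate (stand-in for
# ~globus/.globus/simpleCA/cacert.pem), then compute the hash that
# names the files in /etc/grid-security/certificates.
openssl req -x509 -newkey rsa:2048 -nodes -subj "/O=Grid/CN=Test CA" \
    -keyout /tmp/ca_key.pem -out /tmp/ca_cert.pem -days 1 2>/dev/null
hash=$(openssl x509 -hash -noout -in /tmp/ca_cert.pem)
echo "trusted-cert files would be: ${hash}.0 and ${hash}.signing_policy"
```

Against the real CA you would run the same `openssl x509 -hash -noout` command on ~globus/.globus/simpleCA/cacert.pem.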
Now that we've created a CA and trust it, we'll get a hostcert for the machine:
root@choate:~#
source $GLOBUS_LOCATION/etc/globus-user-env.sh
root@choate:~#
grid-cert-request -host `hostname`
Generating a 1024 bit RSA private key
..++++++
...................................................++++++
writing new private key to '/etc/grid-security/hostkey.pem'
...
Your certificate will be mailed to you within two working days.
If you receive no response, contact Globus Simple CA at bacon@choate
We need to sign the certificate using our simpleCA, as globus:
globus@choate:~$
grid-ca-sign -in /etc/grid-security/hostcert_request.pem -out hostsigned.pem
To sign the request please enter the password for the CA key:******
The new signed certificate is at: /home/globus/.globus/simpleCA//newcerts/01.pem
Our last step is to copy that signed certificate into /etc:
root@choate:~#
cp ~globus/hostsigned.pem /etc/grid-security/hostcert.pem
The hostcert and hostkey are owned by root, and will be used by the GridFTP server. Because the webservices container runs non-root, we need a certificate owned by globus. In the end, we need one host certificate/key owned by root, and one host certificate/key owned by globus. We do that by copying the files:
root@choate:/etc/grid-security#
cp hostcert.pem containercert.pem
root@choate:/etc/grid-security#
cp hostkey.pem containerkey.pem
root@choate:/etc/grid-security#
chown globus:globus container*.pem
root@choate:/etc/grid-security#
ls -l *.pem
-r--------  1 globus globus  887 2005-11-15 07:48 containerkey.pem
-rw-r--r--  1 globus globus 2710 2005-11-15 07:48 containercert.pem
-rw-r--r--  1 root   root   2710 2005-11-15 07:47 hostcert.pem
-rw-r--r--  1 root   root   1404 2005-11-15 07:40 hostcert_request.pem
-r--------  1 root   root    887 2005-11-15 07:40 hostkey.pem
Now we'll get a usercert for bacon. In this example I'm running tcsh, just to show that the version of globus-user-env depends on your shell:
choate
% setenv GLOBUS_LOCATION /usr/local/globus-4.0.1/
choate
% source $GLOBUS_LOCATION/etc/globus-user-env.csh
choate
% grid-cert-request
A certificate request and private key is being created.
You will be asked to enter a PEM pass phrase.
This pass phrase is akin to your account password,
and is used to protect your key file.
If you forget your pass phrase, you will need to
obtain a new certificate.
Generating a 1024 bit RSA private key
.........................................................++++++
.........................++++++
unable to write 'random state'
writing new private key to '/home/bacon/.globus/userkey.pem'
Enter PEM pass phrase:****
Verifying - Enter PEM pass phrase:****
-----
You are about to be asked to enter information that will be incorporated
into your certificate request.
What you are about to enter is what is called a Distinguished Name or a DN.
There are quite a few fields but you can leave some blank
-----
Level 0 Organization [Grid]:
Level 0 Organizational Unit [GlobusTest]:
Level 1 Organizational Unit [simpleCA-choate.mcs.anl.gov]:
Level 2 Organizational Unit [mcs.anl.gov]:
Name (e.g., John M. Smith) []:

A private key and a certificate request has been generated with the subject:

/O=Grid/OU=GlobusTest/OU=simpleCA-choate.mcs.anl.gov/OU=mcs.anl.gov/CN=Charles Bacon

If the CN=Charles Bacon is not appropriate, rerun this
script with the -force -cn "Common Name" options.

Your private key is stored in /home/bacon/.globus/userkey.pem
Your request is stored in /home/bacon/.globus/usercert_request.pem

Please e-mail the request to the Globus Simple CA bacon@choate
You may use a command similar to the following:

  cat /home/bacon/.globus/usercert_request.pem | mail bacon@choate

Only use the above if this machine can send AND receive e-mail. If not,
please mail using some other method.

Your certificate will be mailed to you within two working days.
If you receive no response, contact Globus Simple CA at bacon@choate
Now I need to get that certificate request to the globus user so it can be signed, then send the signed cert back to bacon:
choate %
cat /home/bacon/.globus/usercert_request.pem | mail globus@choate
Now, sign it as user globus:
globus@choate:~$
grid-ca-sign -in request.pem -out signed.pem
To sign the request please enter the password for the CA key:******
The new signed certificate is at: /home/globus/.globus/simpleCA//newcerts/02.pem
globus@choate:~$
cat signed.pem | mail bacon@choate
Now user bacon checks his mail and copies the cert to the proper location:
choate %
cp signed.pem ~/.globus/usercert.pem
choate %
ls -l ~/.globus/
total 12
-rw-r--r--  1 bacon globdev  895 2005-11-15 07:57 usercert.pem
-rw-r--r--  1 bacon globdev 1426 2005-11-15 07:51 usercert_request.pem
-r--------  1 bacon globdev  963 2005-11-15 07:51 userkey.pem
Our last act will be to create a grid-mapfile as root for authorization:
root@choate:/etc/grid-security#
vim /etc/grid-security/grid-mapfile
root@choate:/etc/grid-security#
cat /etc/grid-security/grid-mapfile
"/O=Grid/OU=GlobusTest/OU=simpleCA-choate.mcs.anl.gov/OU=mcs.anl.gov/CN=Charles Bacon" bacon
Note: The globus user doesn't need a user certificate! It's a dummy account that we're using to own the GLOBUS_LOCATION. When it starts the container, it will use the containercert. Only real people need user certs.
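The grid-mapfile format itself is just one quoted certificate DN and a local account name per line, so it can also be maintained by hand or from a script. A minimal sketch, using /tmp/grid-mapfile as a stand-in path (the real file is /etc/grid-security/grid-mapfile and must be edited as root; the toolkit also installs a grid-mapfile-add-entry helper under $GLOBUS_LOCATION/sbin for this):

```shell
# Append a DN-to-account mapping. /tmp/grid-mapfile is a stand-in path;
# the real file is /etc/grid-security/grid-mapfile, owned by root.
MAPFILE=/tmp/grid-mapfile
DN='/O=Grid/OU=GlobusTest/OU=simpleCA-choate.mcs.anl.gov/OU=mcs.anl.gov/CN=Charles Bacon'
echo "\"$DN\" bacon" >> "$MAPFILE"
grep -F 'CN=Charles Bacon' "$MAPFILE"
```

The DN must match the subject of the user certificate exactly, including every OU component, or authorization will fail.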
Now that we have our secure credentials in place, we can start a service. This setup comes from the GridFTP Admin Guide.
root@choate:/etc/grid-security#
vim /etc/xinetd.d/gridftp
root@choate:/etc/grid-security#
cat /etc/xinetd.d/gridftp
service gsiftp
{
    instances       = 100
    socket_type     = stream
    wait            = no
    user            = root
    env             += GLOBUS_LOCATION=/usr/local/globus-4.0.1
    env             += LD_LIBRARY_PATH=/usr/local/globus-4.0.1/lib
    server          = /usr/local/globus-4.0.1/sbin/globus-gridftp-server
    server_args     = -i
    log_on_success  += DURATION
    nice            = 10
    disable         = no
}
root@choate:/etc/grid-security#
vim /etc/services
root@choate:/etc/grid-security#
tail /etc/services
vboxd           20012/udp
binkp           24554/tcp       # binkp fidonet protocol
asp             27374/tcp       # Address Search Protocol
asp             27374/udp
dircproxy       57000/tcp       # Detachable IRC Proxy
tfido           60177/tcp       # fidonet EMSI over telnet
fido            60179/tcp       # fidonet EMSI over TCP
# Local services
gsiftp          2811/tcp
root@choate:/etc/grid-security#
/etc/init.d/xinetd reload
Reloading internet superserver configuration: xinetd.
root@choate:/etc/grid-security#
netstat -an | grep 2811
tcp 0 0 0.0.0.0:2811 0.0.0.0:* LISTEN
Note: I already had xinetd installed:

bacon@choate:~$ dpkg --list xinetd
Desired=Unknown/Install/Remove/Purge/Hold
| Status=Not/Installed/Config-files/Unpacked/Failed-config/Half-installed
|/ Err?=(none)/Hold/Reinst-required/X=both-problems (Status,Err: uppercase=bad)
||/ Name           Version        Description
+++-==============-==============-============================================
ii  xinetd         2.3.13-3       replacement for inetd with many enhancements

You can use inetd instead; see GridFTP xinetd/inetd examples for details. For now, though, you might want to apt-get install xinetd.
Note: On MacOS X, this would be DYLD_LIBRARY_PATH. Check your system documentation if LD_LIBRARY_PATH doesn't work on your system.
Now the gridftp server is waiting for a request, so we'll run a client and transfer a file:
choate %
grid-proxy-init -verify -debug
User Cert File: /home/bacon/.globus/usercert.pem
User Key File: /home/bacon/.globus/userkey.pem
Trusted CA Cert Dir: /etc/grid-security/certificates
Output File: /tmp/x509up_u1817
Your identity: /O=Grid/OU=GlobusTest/OU=simpleCA-choate.mcs.anl.gov/OU=mcs.anl.gov/CN=Charles Bacon
Enter GRID pass phrase for this identity:****
Creating proxy .....++++++++++++ ..++++++++++++ Done
Proxy Verify OK
Your proxy is valid until: Tue Nov 15 20:15:46 2005
choate
% globus-url-copy gsiftp://choate.mcs.anl.gov/etc/group file:///tmp/bacon.test.copy
choate
% diff /tmp/bacon.test.copy /etc/group
choate
%
Okay, so the GridFTP server works. If you had trouble, start with the GridFTP Troubleshooting guide. If the trouble is with your certificates, check the security troubleshooting at Security Troubleshooting. Now we can move on to starting the webservices container.
Now we'll set up an /etc/init.d entry for the webservices container. You can find more details about the container at Container Admin Guide.
globus@choate:~$
vim $GLOBUS_LOCATION/start-stop
globus@choate:~$
cat $GLOBUS_LOCATION/start-stop
#! /bin/sh
set -e

export GLOBUS_LOCATION=/usr/local/globus-4.0.1
export JAVA_HOME=/usr/java/j2sdk1.4.2_10/
export ANT_HOME=/usr/local/apache-ant-1.6.5
export GLOBUS_OPTIONS="-Xms256M -Xmx512M"

. $GLOBUS_LOCATION/etc/globus-user-env.sh

cd $GLOBUS_LOCATION

case "$1" in
    start)
        $GLOBUS_LOCATION/sbin/globus-start-container-detached -p 8443
        ;;
    stop)
        $GLOBUS_LOCATION/sbin/globus-stop-container-detached
        ;;
    *)
        echo "Usage: globus {start|stop}" >&2
        exit 1
        ;;
esac

exit 0
globus@choate:~$
chmod +x $GLOBUS_LOCATION/start-stop
Note: GLOBUS_OPTIONS can be used to pass options to the JVM. Here we are setting heap sizes recommended in the Admin Guide.
Now, as root, we'll create an /etc/init.d script to call the globus user's start-stop script:
root@choate:~#
vim /etc/init.d/globus-4.0.1
root@choate:~#
cat /etc/init.d/globus-4.0.1
#!/bin/sh -e

case "$1" in
    start)
        su - globus /usr/local/globus-4.0.1/start-stop start
        ;;
    stop)
        su - globus /usr/local/globus-4.0.1/start-stop stop
        ;;
    restart)
        $0 stop
        sleep 1
        $0 start
        ;;
    *)
        printf "Usage: $0 {start|stop|restart}\n" >&2
        exit 1
        ;;
esac

exit 0
root@choate:~#
chmod +x /etc/init.d/globus-4.0.1
root@choate:~#
/etc/init.d/globus-4.0.1 start
Starting Globus container. PID: 29985
root@choate:~#
cat /usr/local/globus-4.0.1/var/container.log
2005-11-15 08:48:00,886 ERROR service.ReliableFileTransferImpl [main,<init>:68] Unable to setup database driver with pooling. A connection error has occurred: FATAL: no pg_hba.conf entry for host "140.221.8.31", user "globus", database "rftDatabase", SSL off
2005-11-15 08:48:02,183 WARN service.ReliableFileTransferHome [main,initialize:97] All RFT requests will fail and all GRAM jobs that require file staging will fail. A connection error has occurred: FATAL: no pg_hba.conf entry for host "140.221.8.31", user "globus", database "rftDatabase", SSL off
Starting SOAP server at: https://140.221.8.31:8443/wsrf/services/
With the following services:
[1]: https://140.221.8.31:8443/wsrf/services/TriggerFactoryService
[2]: https://140.221.8.31:8443/wsrf/services/DelegationTestService
[3]: https://140.221.8.31:8443/wsrf/services/SecureCounterService
[4]: https://140.221.8.31:8443/wsrf/services/IndexServiceEntry
[5]: https://140.221.8.31:8443/wsrf/services/DelegationService
[6]: https://140.221.8.31:8443/wsrf/services/InMemoryServiceGroupFactory
[7]: https://140.221.8.31:8443/wsrf/services/mds/test/execsource/IndexService
[8]: https://140.221.8.31:8443/wsrf/services/mds/test/subsource/IndexService
[9]: https://140.221.8.31:8443/wsrf/services/SubscriptionManagerService
[10]: https://140.221.8.31:8443/wsrf/services/TestServiceWrongWSDL
[11]: https://140.221.8.31:8443/wsrf/services/SampleAuthzService
[12]: https://140.221.8.31:8443/wsrf/services/WidgetNotificationService
[13]: https://140.221.8.31:8443/wsrf/services/AdminService
[14]: https://140.221.8.31:8443/wsrf/services/DefaultIndexServiceEntry
[15]: https://140.221.8.31:8443/wsrf/services/CounterService
[16]: https://140.221.8.31:8443/wsrf/services/TestService
[17]: https://140.221.8.31:8443/wsrf/services/InMemoryServiceGroup
[18]: https://140.221.8.31:8443/wsrf/services/SecurityTestService
[19]: https://140.221.8.31:8443/wsrf/services/ContainerRegistryEntryService
[20]: https://140.221.8.31:8443/wsrf/services/NotificationConsumerFactoryService
[21]: https://140.221.8.31:8443/wsrf/services/TestServiceRequest
[22]: https://140.221.8.31:8443/wsrf/services/IndexFactoryService
[23]: https://140.221.8.31:8443/wsrf/services/ReliableFileTransferService
[24]: https://140.221.8.31:8443/wsrf/services/mds/test/subsource/IndexServiceEntry
[25]: https://140.221.8.31:8443/wsrf/services/Version
[26]: https://140.221.8.31:8443/wsrf/services/NotificationConsumerService
[27]: https://140.221.8.31:8443/wsrf/services/IndexService
[28]: https://140.221.8.31:8443/wsrf/services/NotificationTestService
[29]: https://140.221.8.31:8443/wsrf/services/ReliableFileTransferFactoryService
[30]: https://140.221.8.31:8443/wsrf/services/DefaultTriggerServiceEntry
[31]: https://140.221.8.31:8443/wsrf/services/TriggerServiceEntry
[32]: https://140.221.8.31:8443/wsrf/services/PersistenceTestSubscriptionManager
[33]: https://140.221.8.31:8443/wsrf/services/mds/test/execsource/IndexServiceEntry
[34]: https://140.221.8.31:8443/wsrf/services/DefaultTriggerService
[35]: https://140.221.8.31:8443/wsrf/services/TriggerService
[36]: https://140.221.8.31:8443/wsrf/services/gsi/AuthenticationService
[37]: https://140.221.8.31:8443/wsrf/services/TestRPCService
[38]: https://140.221.8.31:8443/wsrf/services/ManagedMultiJobService
[39]: https://140.221.8.31:8443/wsrf/services/RendezvousFactoryService
[40]: https://140.221.8.31:8443/wsrf/services/WidgetService
[41]: https://140.221.8.31:8443/wsrf/services/ManagementService
[42]: https://140.221.8.31:8443/wsrf/services/ManagedExecutableJobService
[43]: https://140.221.8.31:8443/wsrf/services/InMemoryServiceGroupEntry
[44]: https://140.221.8.31:8443/wsrf/services/AuthzCalloutTestService
[45]: https://140.221.8.31:8443/wsrf/services/DelegationFactoryService
[46]: https://140.221.8.31:8443/wsrf/services/DefaultIndexService
[47]: https://140.221.8.31:8443/wsrf/services/ShutdownService
[48]: https://140.221.8.31:8443/wsrf/services/ContainerRegistryService
[49]: https://140.221.8.31:8443/wsrf/services/TestAuthzService
[50]: https://140.221.8.31:8443/wsrf/services/CASService
[51]: https://140.221.8.31:8443/wsrf/services/ManagedJobFactoryService
2005-11-15 08:48:29,063 INFO impl.DefaultIndexService [ServiceThread-10,processConfigFile:107] Reading default registration configuration from file: /usr/local/globus-4.0.1/etc/globus_wsrf_mds_index/hierarchy.xml
2005-11-15 08:48:31,705 ERROR impl.QueryAggregatorSource [Thread-12,pollGetMultiple:149] Exception Getting Multiple Resource Properties from https://140.221.8.31:8443/wsrf/services/ReliableFileTransferFactoryService: java.rmi.RemoteException: Failed to serialize resource property org.globus.transfer.reliable.service.factory.TotalNumberOfBytesTransferred@e8eeca; nested exception is: org.apache.commons.dbcp.DbcpException: A connection error has occurred: FATAL: no pg_hba.conf entry for host "140.221.8.31", user "globus", database "rftDatabase", SSL off
Note: The RFT warnings are expected right now because we haven't set up our database yet. Otherwise, things look good.
Note: 140.221.8.31 is my IP address. Some people following the quickstart may see "127.0.0.1" here. You need to fix that! Edit the container's <globalConfiguration> section to set:

    <parameter name="logicalHost" value="140.221.8.32" />

You can also use this to select the interface to publish for a multi-homed host. See Global Configuration for more container config options.
At this point, we can use one of the sample clients/services to interact with the container:
choate
% setenv JAVA_HOME /usr/java/j2sdk1.4.2_10/
choate
% setenv ANT_HOME /usr/local/apache-ant-1.6.5/
choate
% setenv PATH $ANT_HOME/bin:$JAVA_HOME/bin:$PATH
choate
% counter-client -s https://choate.mcs.anl.gov:8443/wsrf/services/CounterService
Got notification with value: 3
Counter has value: 3
Got notification with value: 13
Note: Whenever you see me testing against "choate.mcs.anl.gov" in this document, use your own fully qualified hostname. Connections to choate will time out because the host is behind a firewall.
That is the expected output, so it looks like the container is up and running. Next we'll configure a database for RFT to get rid of that pesky warning, and so we can reliably transfer files using GridFTP!
Following the instructions at RFT Admin, we'll first configure the system to allow TCP/IP connections to postgres, as well as adding a trust entry for our current host:
root@choate:~#
vim /var/lib/postgres/postmaster.conf
root@choate:~#
grep POSTMASTER /var/lib/postgres/postmaster.conf
POSTMASTER_OPTIONS="-i"
root@choate:~#
vim /var/lib/postgres/data/pg_hba.conf
root@choate:~#
grep rftDatabase /etc/postgresql/pg_hba.conf
host rftDatabase "globus" "140.221.8.31" 255.255.255.255 md5
root@choate:~#
/etc/init.d/postgresql restart
Stopping PostgreSQL database server: postmaster.
Starting PostgreSQL database server: postmaster.
root@choate:~#
su postgres -c "createuser -P globus"
Enter password for new user:*****
Enter it again:*****
Shall the new user be allowed to create databases? (y/n) y
Shall the new user be allowed to create more new users? (y/n) n
CREATE USER
Note: This is one of the most system-dependent steps of this quickstart. Your pg_hba.conf and postmaster.conf files may be located in a different directory. Please consult your vendor's notes for details.
Now the globus user can create the rftDatabase:
globus@choate:~$
createdb rftDatabase
CREATE DATABASE
globus@choate:~$
psql -d rftDatabase -f $GLOBUS_LOCATION/share/globus_wsrf_rft/rft_schema.sql
psql:/usr/local/globus-4.0.1/share/globus_wsrf_rft/rft_schema.sql:6: NOTICE: CREATE TABLE / PRIMARY KEY will create implicit index "requestid_pkey" for table "requestid"
CREATE TABLE
psql:/usr/local/globus-4.0.1/share/globus_wsrf_rft/rft_schema.sql:11: NOTICE: CREATE TABLE / PRIMARY KEY will create implicit index "transferid_pkey" for table "transferid"
CREATE TABLE
psql:/usr/local/globus-4.0.1/share/globus_wsrf_rft/rft_schema.sql:30: NOTICE: CREATE TABLE / PRIMARY KEY will create implicit index "request_pkey" for table "request"
CREATE TABLE
psql:/usr/local/globus-4.0.1/share/globus_wsrf_rft/rft_schema.sql:65: NOTICE: CREATE TABLE / PRIMARY KEY will create implicit index "transfer_pkey" for table "transfer"
CREATE TABLE
CREATE TABLE
CREATE TABLE
CREATE INDEX
globus@choate:~$
vim $GLOBUS_LOCATION/etc/globus_wsrf_rft/jndi-config.xml
globus@choate:~$
grep -C 3 password $GLOBUS_LOCATION/etc/globus_wsrf_rft/jndi-config.xml
    </parameter>
    <parameter>
        <name>
            password
        </name>
        <value>
            *****
I have created the database, loaded the RFT schema, and changed the password in the jndi-config.xml file. If your database isn't owned by the same user as the container, you will also need to change the username parameter in jndi-config.xml. In this example, we installed as globus and made the database as globus, so I only changed the password.
The database is set up, so we restart the container to load the new RFT configuration:
root@choate:~#
/etc/init.d/globus-4.0.1 restart
Stopping Globus container. PID: 29985
Starting Globus container. PID: 8620
root@choate:~#
head /usr/local/globus-4.0.1/var/container.log
Starting SOAP server at: https://140.221.8.31:8443/wsrf/services/
With the following services:
[1]: https://140.221.8.31:8443/wsrf/services/TriggerFactoryService
[2]: https://140.221.8.31:8443/wsrf/services/DelegationTestService
[3]: https://140.221.8.31:8443/wsrf/services/SecureCounterService
[4]: https://140.221.8.31:8443/wsrf/services/IndexServiceEntry
[5]: https://140.221.8.31:8443/wsrf/services/DelegationService
[6]: https://140.221.8.31:8443/wsrf/services/InMemoryServiceGroupFactory
[7]: https://140.221.8.31:8443/wsrf/services/mds/test/execsource/IndexService
...
Great, we got rid of the warning. Now let's try an RFT transfer to make sure the service is really working:
choate %
cp /usr/local/globus-4.0.1/share/globus_wsrf_rft_test/transfer.xfr /tmp/rft.xfr
choate %
vim /tmp/rft.xfr
choate %
cat /tmp/rft.xfr
true
16000
16000
false
1
true
1
null
null
false
10
gsiftp://choate.mcs.anl.gov:2811/etc/group
gsiftp://choate.mcs.anl.gov:2811/tmp/rftTest_Done.tmp
choate %
rft -h choate.mcs.anl.gov -f /tmp/rft.xfr
Number of transfers in this request: 1
Subscribed for overall status
Termination time to set: 60 minutes
Overall status of transfer:
Finished/Active/Failed/Retrying/Pending
0/1/0/0/0
Overall status of transfer:
Finished/Active/Failed/Retrying/Pending
1/0/0/0/0
All Transfers are completed
choate %
diff /etc/group /tmp/rftTest_Done.tmp
choate %
The diff produced no output, so the two files are identical: RFT did its job, performing a reliable transfer and notifying us of its status and results.
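The empty diff is the success case: diff exits 0 when its inputs are byte-identical, so the same check can be scripted. A self-contained sketch (the file paths are illustrative, not from the transcript):

```shell
# diff exits 0 when the files match, so its status verifies a transfer
printf 'root:x:0:\n' > /tmp/group.src
cp /tmp/group.src /tmp/group.copy          # stand-in for the RFT transfer
if diff -q /tmp/group.src /tmp/group.copy >/dev/null; then
    echo "transfer verified"
fi
```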
Now that we have GridFTP and RFT working, we can set up GRAM for resource management. First we have to set up sudo so the globus user can start jobs as a different user. For reference, you can see the GRAM Admin Guide.
root@choate:~#
visudo
root@choate:~#
cat /etc/sudoers
globus ALL=(bacon) NOPASSWD: /usr/local/globus-4.0.1/libexec/globus-gridmap-and-execute -g /etc/grid-security/grid-mapfile /usr/local/globus-4.0.1/libexec/globus-job-manager-script.pl *
globus ALL=(bacon) NOPASSWD: /usr/local/globus-4.0.1/libexec/globus-gridmap-and-execute -g /etc/grid-security/grid-mapfile /usr/local/globus-4.0.1/libexec/globus-gram-local-proxy-tool *
Make sure each rule is on a single line in your sudoers file; they are wrapped here only to keep the page width down. With that addition, we can now run jobs:
choate %
globusrun-ws -submit -c /bin/true
Submitting job...Done.
Job ID: uuid:3304e3f2-55f2-11da-8b8f-00d0b7b7c0bc
Termination time: 11/16/2005 16:09 GMT
Current job state: Active
Current job state: CleanUp
Current job state: Done
Destroying job...Done.
choate %
echo $?
0
choate %
globusrun-ws -submit -c /bin/false
Submitting job...Done.
Job ID: uuid:456b7c9a-55f2-11da-9b0d-00d0b7b7c0bc
Termination time: 11/16/2005 16:09 GMT
Current job state: Active
Current job state: CleanUp
Current job state: Done
Destroying job...Done.
choate %
echo $?
1
Success. Now we've got a working GRAM installation.
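The test above relies on globusrun-ws passing the remote job's exit status through to the local shell. The underlying convention is the standard one, runnable anywhere:

```shell
# Exit-status convention surfaced by globusrun-ws: 0 is success, non-zero is failure
/bin/true
true_status=$?
/bin/false
false_status=$?
echo "true=$true_status false=$false_status"
```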
Alas, it's not much of a grid with just one machine. So let's start up on another machine and add it to this little test grid. For a change of pace, I'm going to use the binary installer on this machine. First, though, let's get some prereqs out of the way:
root@cognito:~#
adduser globus
root@cognito:~#
mkdir /usr/local/globus-4.0.1
root@cognito:~#
chown globus:globus /usr/local/globus-4.0.1
root@cognito:/usr/java#
./j2sdk-1_4_2_10-linux-i586.bin
root@cognito:/usr/local#
tar xzf apache-ant-1.6.5-bin.tar.gz
root@cognito:/usr/local#
sudo -V
Sudo version 1.6.8p7 Authentication methods: 'pam' Syslog facility if syslog is being used for logging: authpriv ...
Then, as user globus:
globus@cognito:~$
tar xzf gt4.0.1-ia32_debian_3.1-binary-installer.tar.gz
globus@cognito:~$
export JAVA_HOME=/usr/java/j2sdk1.4.2_10/
globus@cognito:~$
export ANT_HOME=/usr/local/apache-ant-1.6.5/
globus@cognito:~$
export PATH=$ANT_HOME/bin:$JAVA_HOME/bin:$PATH
![]() | Note |
---|---|
You might notice that I didn't install Postgres on this machine. That's because my grid can actually share the services of the RFT located on my first machine. Even if I weren't planning on that, I could add this new machine to the pg_hba.conf on the first machine and re-use the existing DB server. |
Now we can install from binaries:
globus@cognito:~/gt4.0.1-ia32_debian_3.1-binary-installer$
./configure \ --prefix=/usr/local/globus-4.0.1
checking for javac... /usr/java/j2sdk1.4.2_10//bin/javac
checking for ant... /usr/local/apache-ant-1.6.5//bin/ant
configure: creating ./config.status
config.status: creating Makefile
globus@cognito:~/gt4.0.1-ia32_debian_3.1-binary-installer$
make
cd gpt-3.2autotools2004 && OBJECT_MODE=32 ./build_gpt
...
Binaries are much faster! This is done in less than 10 minutes.
...
tar -C /usr/local/globus-4.0.1 -xzf binary-trees/globus_wsrf_rft_test-*/*.tar.gz
tar -C /usr/local/globus-4.0.1 -xzf binary-trees/globus_rendezvous-*/*.tar.gz
echo "Your build completed successfully. Please run make install."
Your build completed successfully. Please run make install.
globus@cognito:~/gt4.0.1-ia32_debian_3.1-binary-installer$
make install
ln -s /usr/local/globus-4.0.1/etc/gpt/packages /usr/local/globus-4.0.1/etc/globus_packages
...
config.status: creating fork.pm
..Done
Now let's get security set up on the second machine. We're just going to add trust for the original SimpleCA to this new machine; there's no need to create a new one. This corresponds to the multiple machines section of the SimpleCA guide.
Please make sure that your two machines agree on the time! Certificates carry validity dates, so if the two machines' clocks disagree, you might get errors saying a certificate is not yet valid. If you use NTP, this won't be a problem.
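A quick way to check for clock skew is to compare epoch timestamps from both hosts. This sketch stubs the remote reading with the local clock so it is self-contained; in practice the remote value would come from something like `ssh choate date -u +%s`:

```shell
# Compare two clocks as epoch seconds; the remote reading is stubbed here
local_epoch=$(date -u +%s)
remote_epoch=$local_epoch            # stand-in for the remote host's clock
skew=$(( local_epoch - remote_epoch ))
skew=${skew#-}                       # absolute value of the difference
if [ "$skew" -le 300 ]; then
    echo "clocks agree to within 5 minutes"
fi
```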
globus@cognito:~$
scp choate:.globus/simpleCA/globus_simple_ca_ebb88ce5_setup-0.18.tar.gz .
globus@cognito:~$
export GLOBUS_LOCATION=/usr/local/globus-4.0.1
globus@cognito:~$
$GLOBUS_LOCATION/sbin/gpt-build globus_simple_ca_ebb88ce5_setup-0.18.tar.gz
gpt-build ====> CHECKING BUILD DEPENDENCIES FOR globus_simple_ca_ebb88ce5_setup
gpt-build ====> Changing to /sandbox/globus/BUILD/globus_simple_ca_ebb88ce5_setup-0.18/
gpt-build ====> BUILDING globus_simple_ca_ebb88ce5_setup
gpt-build ====> Changing to /sandbox/globus/BUILD
gpt-build ====> REMOVING empty package globus_simple_ca_ebb88ce5_setup-noflavor-data
gpt-build ====> REMOVING empty package globus_simple_ca_ebb88ce5_setup-noflavor-dev
gpt-build ====> REMOVING empty package globus_simple_ca_ebb88ce5_setup-noflavor-doc
gpt-build ====> REMOVING empty package globus_simple_ca_ebb88ce5_setup-noflavor-pgm_static
gpt-build ====> REMOVING empty package globus_simple_ca_ebb88ce5_setup-noflavor-rtl
globus@cognito:~$
$GLOBUS_LOCATION/sbin/gpt-postinstall
running /usr/local/globus-4.0.1/setup/globus/./setup-ssl-utils.ebb88ce5..
[ Changing to /usr/local/globus-4.0.1/setup/globus/. ]
...
setup-ssl-utils: Complete
..Done
WARNING: The following packages were not set up correctly:
    globus_simple_ca_ebb88ce5_setup-noflavor-pgm
Check the package documentation or run postinstall -verbose to see what happened
That installed the package, but the warning is letting us know that root still needs to run the setup script:
root@cognito:~#
export GLOBUS_LOCATION=/usr/local/globus-4.0.1
root@cognito:~#
source $GLOBUS_LOCATION/etc/globus-user-env.sh
root@cognito:~#
$GLOBUS_LOCATION/setup/globus_simple_ca_ebb88ce5_setup/setup-gsi -default
setup-gsi: Configuring GSI security
Making /etc/grid-security...
mkdir /etc/grid-security
Making trusted certs directory: /etc/grid-security/certificates/
mkdir /etc/grid-security/certificates/
Installing /etc/grid-security/certificates//grid-security.conf.ebb88ce5...
Running grid-security-config...
Installing Globus CA certificate into trusted CA certificate directory...
Installing Globus CA signing policy into trusted CA certificate directory...
setup-gsi: Complete
Now our new machine's security directory looks like our other machine:
root@cognito:~#
ls /etc/grid-security/
certificates  globus-host-ssl.conf  globus-user-ssl.conf  grid-security.conf
root@cognito:~#
ls /etc/grid-security/certificates/
ebb88ce5.0  ebb88ce5.signing_policy  globus-host-ssl.conf.ebb88ce5  globus-user-ssl.conf.ebb88ce5  grid-security.conf.ebb88ce5
Now we need a hostcert for the new machine:
root@cognito:~#
grid-cert-request -host `hostname`
The hostname cognito does not appear to be fully qualified.
Do you wish to continue? [n] n
Aborting
...
If you receive no response, contact Globus Simple CA at bacon@choate
root@cognito:~#
hostname
cognito
Uh-oh. Our hostname isn't fully qualified, which is going to cause us trouble down the road. If you have this problem, there are several possible solutions. One is to run the hostname command as root to set your FQDN as your hostname. Another possibility is that your /etc/hosts may have a short name listed for your IP address. Let's see what the problem is on cognito:
root@cognito:~#
cat /etc/hosts
127.0.0.1 localhost

# The following lines are desirable for IPv6 capable hosts
# (added automatically by netbase upgrade)
::1     ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
ff02::3 ip6-allhosts
That looks okay. On Debian, the hostname is stored in /etc/hostname. Let's see what it says:
root@cognito:~#
cat /etc/hostname
cognito
Ah, that's the problem. But this is not so bad, because a reverse-lookup of my IP address should return my FQDN, since it will be looked up in DNS:
root@cognito:~#
host 140.221.8.109
109.8.221.140.in-addr.arpa domain name pointer cognito.mcs.anl.gov.
If the problem had been in /etc/hosts, I would have fixed it. Here's what a good /etc/hosts line would look like:
140.221.8.109 cognito.mcs.anl.gov cognito
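Whether a hostname is fully qualified can also be tested mechanically, since an FQDN always contains a dot. A sketch with the hostname hard-coded; in practice you would substitute the output of the hostname command:

```shell
# Check for a dot in the hostname to decide whether it is fully qualified
name="cognito.mcs.anl.gov"    # in practice: name=$(hostname)
case "$name" in
    *.*) fqdn=yes ;;
    *)   fqdn=no ;;
esac
echo "fully qualified: $fqdn"
```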
Since reverse lookups work okay, I will just spell out the FQDN by hand in this cert request:
root@cognito:~#
grid-cert-request -host cognito.mcs.anl.gov -force
/etc/grid-security/hostcert_request.pem already exists
/etc/grid-security/hostcert.pem already exists
/etc/grid-security/hostkey.pem already exists
...
Your certificate will be mailed to you within two working days.
If you receive no response, contact Globus Simple CA at bacon@choate
The request already existed for "cognito", but the -force overwrote that request with one for "cognito.mcs.anl.gov". Now I need to copy that back to choate and sign it:
root@cognito:~#
cat /etc/grid-security/hostcert_request.pem | mail globus@choate
Now I sign it as globus on choate. Remember, that's where I installed the SimpleCA, so that's where I sign it:
globus@choate:/tmp$
grid-ca-sign -in in.pem -out out.pem
To sign the request
please enter the password for the CA key:
The new signed certificate is at: /home/globus/.globus/simpleCA//newcerts/03.pem
globus@choate:/tmp$
cat /tmp/out.pem | mail root@cognito
Root checks his email, then saves the signed cert:
root@cognito:~#
cp out.pem /etc/grid-security/hostcert.pem
root@cognito:/etc/grid-security#
cp hostcert.pem containercert.pem
root@cognito:/etc/grid-security#
cp hostkey.pem containerkey.pem
root@cognito:/etc/grid-security#
chown globus:globus container*.pem
root@cognito:/etc/grid-security#
ls -l *.pem
-rw-r--r--  1 globus globus 2711 2005-11-15 11:14 containercert.pem
-r--------  1 globus globus  887 2005-11-15 11:15 containerkey.pem
-rw-r--r--  1 root   root   2711 2005-11-15 11:14 hostcert.pem
-rw-r--r--  1 root   root   1405 2005-11-15 11:09 hostcert_request.pem
-r--------  1 root   root    887 2005-11-15 11:09 hostkey.pem
There. Now cognito is set up with host and container certs, and it trusts the CA of my grid. The last step for root is to create a grid-mapfile for myself again:
root@cognito:/etc/grid-security#
vim grid-mapfile
root@cognito:/etc/grid-security#
cat grid-mapfile
"/O=Grid/OU=GlobusTest/OU=simpleCA-choate.mcs.anl.gov/OU=mcs.anl.gov/CN=Charles Bacon" bacon
Also, user bacon should get a local copy of the usercert:
cognito %
scp -r choate:.globus .
Password:
usercert.pem          100%  895   0.9KB/s   00:00
usercert_request.pem  100% 1426   1.4KB/s   00:00
userkey.pem           100%  963   0.9KB/s   00:00
GridFTP setup on the second machine is identical to the first. I'll just list the commands here; see Section 2.4, “Set up GridFTP” for the file contents, or simply copy them from the first machine.
root@cognito:/etc/grid-security#
vim /etc/xinetd.d/gridftp
root@cognito:/etc/grid-security#
vim /etc/services
root@cognito:/etc/grid-security#
/etc/init.d/xinetd reload
Reloading internet superserver configuration: xinetd.
Now we can test it:
cognito %
setenv GLOBUS_LOCATION /usr/local/globus-4.0.1
cognito %
source $GLOBUS_LOCATION/etc/globus-user-env.csh
cognito %
grid-proxy-init -verify -debug
User Cert File: /home/bacon/.globus/usercert.pem
User Key File: /home/bacon/.globus/userkey.pem
Trusted CA Cert Dir: /etc/grid-security/certificates
Output File: /tmp/x509up_u1817
Your identity: /O=Grid/OU=GlobusTest/OU=simpleCA-choate.mcs.anl.gov/OU=mcs.anl.gov/CN=Charles Bacon
Enter GRID pass phrase for this identity:
Creating proxy ...........++++++++++++
........++++++++++++
Done
Proxy Verify OK
Your proxy is valid until: Tue Nov 15 23:33:37 2005
cognito %
globus-url-copy gsiftp://cognito.mcs.anl.gov/etc/group \ gsiftp://choate.mcs.anl.gov/tmp/from-cognito
That was a slightly fancier test than the one I ran on choate: a third-party transfer between two GridFTP servers. It worked, so the local and remote security is set up correctly.
Setting up the container on the second machine is a lot like the first. I'll list the commands here. See Section 2.5, “Starting the webservices container”, or you can just copy the files from the first machine. First globus creates the start-stop script:
globus@cognito:~$
vim $GLOBUS_LOCATION/start-stop
globus@cognito:~$
chmod +x $GLOBUS_LOCATION/start-stop
Then root creates an init.d script to call it:
root@cognito:~#
vim /etc/init.d/globus-4.0.1
root@cognito:~#
chmod +x /etc/init.d/globus-4.0.1
root@cognito:/etc/grid-security#
/etc/init.d/globus-4.0.1 start
Starting Globus container. PID: 17269
For a change of pace, we'll set up GRAM first on the second machine, even though we don't have a working RFT locally. As last time, we'll need to set up the sudoers. See Section 2.7, “Setting up WS GRAM” for the sudo contents, or copy the sudoers file from the first machine. If you just copy the file, please make sure that you have sudo installed already, and that the file's permissions are 440.
root@cognito:/etc/grid-security#
visudo
Next, however, we'll change the GRAM RFT configuration, using the GRAM docs about setting up non-default configurations for GRAM. The only things we're changing right now are the "staging host" and "staging protocol" parameters:
globus@cognito:~$
$GLOBUS_LOCATION/setup/globus/setup-gram-service-common --staging-host=choate.mcs.anl.gov --staging-protocol=https
Running /usr/local/globus-4.0.1/setup/globus/setup-gram-service-common
Determining system information...
...
BUILD SUCCESSFUL
Total time: 21 seconds
Restart the container:
root@cognito:/etc/grid-security#
/etc/init.d/globus-4.0.1 restart
Stopping Globus container. PID: 17269
Container stopped
Starting Globus container. PID: 18069
Now we can submit a staging job:
cognito %
vim a.rsl
cognito %
cat a.rsl
<job>
    <executable>my_echo</executable>
    <directory>${GLOBUS_USER_HOME}</directory>
    <argument>Hello</argument>
    <argument>World!</argument>
    <stdout>${GLOBUS_USER_HOME}/stdout</stdout>
    <stderr>${GLOBUS_USER_HOME}/stderr</stderr>
    <fileStageIn>
        <transfer>
            <sourceUrl>gsiftp://cognito.mcs.anl.gov:2811/bin/echo</sourceUrl>
            <destinationUrl>file:///${GLOBUS_USER_HOME}/my_echo</destinationUrl>
        </transfer>
    </fileStageIn>
    <fileCleanUp>
        <deletion>
            <file>file:///${GLOBUS_USER_HOME}/my_echo</file>
        </deletion>
    </fileCleanUp>
</job>
cognito %
globusrun-ws -submit -S -f a.rsl
Delegating user credentials...Done.
Submitting job...Done.
Job ID: uuid:6732f346-5604-11da-9951-0002b3882c16
Termination time: 11/16/2005 18:19 GMT
Current job state: StageIn
Current job state: Active
Current job state: CleanUp
Current job state: Done
Destroying job...Done.
Cleaning up any delegated credentials...Done.
cognito %
cat ~/stdout
Hello World!
cognito %
ls ~/my_echo
ls: /home/bacon/my_echo: No such file or directory
This is an example of a staging job. It copies the /bin/echo command from cognito to my home directory and names it my_echo. Then it runs it with some arguments, and captures the stderr/stdout. One of the neat features here is that it used the RFT service on choate to transfer the file via the GridFTP server on cognito. It's starting to look like a Grid!
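The stage-in/run/clean-up cycle that the job describes can be mimicked locally with plain shell commands, which makes its behavior easier to see (the paths here are illustrative):

```shell
# Mimic the staging job locally: stage in, run with arguments, capture stdout, clean up
cp /bin/echo /tmp/my_echo                  # fileStageIn
/tmp/my_echo Hello World! > /tmp/stdout    # executable, arguments, stdout
rm /tmp/my_echo                            # fileCleanUp
cat /tmp/stdout
```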
You can get other examples of GRAM RSL files from GRAM usage scenarios.
Now that we have two machines, we can also set up some information services to monitor them together. Let's have cognito register its index service into choate so we can have an aggregated view of the two machines, as described at Building VOs in the MDS documentation:
globus@cognito:~$
vim /usr/local/globus-4.0.1/etc/globus_wsrf_mds_index/hierarchy.xml
globus@cognito:~$
grep upstream $GLOBUS_LOCATION/etc/globus_wsrf_mds_index/hierarchy.xml
<!-- <upstream> elements specify remote index services that the local index
Set an upstream entry for each VO index that you wish to participate in.
<upstream>https://choate.mcs.anl.gov:8443/wsrf/services/DefaultIndexService</upstream>
root@cognito:~#
/etc/init.d/globus-4.0.1 restart
Stopping Globus container. PID: 18069
Container stopped
Starting Globus container. PID: 18405
Now I can run some index service clients and check that the registration worked:
cognito %
setenv JAVA_HOME /usr/java/j2sdk1.4.2_10/
cognito %
setenv ANT_HOME /usr/local/apache-ant-1.6.5/
cognito %
setenv PATH $ANT_HOME/bin:$JAVA_HOME/bin:$PATH
cognito %
host cognito
cognito.mcs.anl.gov has address 140.221.8.109
cognito %
wsrf-query -s https://choate.mcs.anl.gov:8443/wsrf/services/DefaultIndexService '/*' | grep 140.221.8.109 | wc -l
7
So we've got seven entries in the remote index that reference our machine. That means our upstream registration was processed successfully. But what do those entries look like? Here's an example:
<ns15:Address xmlns:ns15="http://schemas.xmlsoap.org/ws/2004/03/addressing"> https://140.221.8.109:8443/wsrf/services/ManagedJobFactoryService</ns15:Address>
It's hard to read, isn't it? That's an entry in choate pointing at the WS GRAM service we just set up on cognito. Our life would be easier if we set up WebMDS to visualize the contents of the Index Service, so let's do that next.
![]() | Note |
---|---|
Notice that I hadn't set up my Java variables yet, but the GRAM client above worked just fine. That's because it's written in C, even though it interacts with the Java container. Language neutrality is one of the features of web services. |
WebMDS has a dependency on the Tomcat container, so we'll install that now. The recommended version is 5.0.28, which is available from the Apache Tomcat website. We're following the standard install instructions from the WebMDS Admin Guide.
root@cognito:/usr/local#
tar xzf jakarta-tomcat-5.0.28.tar.gz
root@cognito:/usr/local#
chown -R globus:globus jakarta-tomcat-5.0.28
Now the globus user can configure WebMDS:
globus@cognito:~$
vim $GLOBUS_LOCATION/lib/webmds/conf/indexinfo
globus@cognito:~$
grep choate /usr/local/globus-4.0.1/lib/webmds/conf/indexinfo
<value>https://choate.mcs.anl.gov:8443/wsrf/services/DefaultIndexService</value>globus@cognito:~$
export CATALINA_HOME=/usr/local/jakarta-tomcat-5.0.28
globus@cognito:~$
$GLOBUS_LOCATION/lib/webmds/bin/webmds-create-context-file \
$CATALINA_HOME/conf/Catalina/localhost
globus@cognito:~$
$CATALINA_HOME/bin/startup.sh
Using CATALINA_BASE:   /usr/local/jakarta-tomcat-5.0.28
Using CATALINA_HOME:   /usr/local/jakarta-tomcat-5.0.28
Using CATALINA_TMPDIR: /usr/local/jakarta-tomcat-5.0.28/temp
Using JAVA_HOME:       /usr/java/j2sdk1.4.2_10/
That started Tomcat on port 8080, so now I can browse to the /webmds directory on that port of my machine (http://cognito.mcs.anl.gov:8080/webmds/, though that host is behind a firewall; visit your own machine instead). Now I can read the info stored in the index in a human-readable format. For instance, I can see this:
RFT 140.221.8.31 0 active transfer resources, transferring 0 files. 26.06 KB transferred in 2 files since start of database.
Those two RFT transfers were the one I ran by hand in the RFT section, then the RFT transfer that happened because of my GRAM job that used file staging. I can also see some information about my GRAM services:
GRAM 140.221.8.109 1 queues, submitting to 0 cluster(s) of 0 host(s).
If I click for details, I get:
ComputingElement: Name: default UniqueID: default Info: TotalCPUs: 1
This works because the GRAM and RFT services are configured to register into the local service automatically. When we edited the hierarchy.xml file to point to choate, all the information started to be cached centrally.
When we set up our second machine, we copied the usercert over to the new machine because the systems did not share a home directory over NFS. There are other solutions for making proxy credentials available, and we'll use MyProxy to set up another way. First, we'll turn choate into a MyProxy server by following the instructions at configuring MyProxy. Note that in 4.0.2 and later, myproxy-server.config appears in $GLOBUS_LOCATION/share/myproxy instead of $GLOBUS_LOCATION/etc.
root@choate:~#
export GLOBUS_LOCATION=/usr/local/globus-4.0.1/
root@choate:~#
cp $GLOBUS_LOCATION/etc/myproxy-server.config /etc
root@choate:~#
vim /etc/myproxy-server.config
root@choate:~#
diff /etc/myproxy-server.config $GLOBUS_LOCATION/etc/myproxy-server.config
15,21c15,21
< accepted_credentials      "*"
< authorized_retrievers     "*"
< default_retrievers        "*"
< authorized_renewers       "*"
< default_renewers          "none"
< authorized_key_retrievers "*"
< default_key_retrievers    "none"
---
> #accepted_credentials      "*"
> #authorized_retrievers     "*"
> #default_retrievers        "*"
> #authorized_renewers       "*"
> #default_renewers          "none"
> #authorized_key_retrievers "*"
> #default_key_retrievers    "none"
root@choate:~#
cat $GLOBUS_LOCATION/share/myproxy/etc.services.modifications >> /etc/services
root@choate:~#
tail /etc/services
binkp           24554/tcp   # binkp fidonet protocol
asp             27374/tcp   # Address Search Protocol
asp             27374/udp
dircproxy       57000/tcp   # Detachable IRC Proxy
tfido           60177/tcp   # fidonet EMSI over telnet
fido            60179/tcp   # fidonet EMSI over TCP

# Local services
gsiftp          2811/tcp
myproxy-server  7512/tcp    # Myproxy server
root@choate:~#
cp $GLOBUS_LOCATION/share/myproxy/etc.xinetd.myproxy /etc/xinetd.d/myproxy
root@choate:~#
vim /etc/xinetd.d/myproxy
root@choate:~#
cat /etc/xinetd.d/myproxy
service myproxy-server
{
    socket_type = stream
    protocol    = tcp
    wait        = no
    user        = root
    server      = /usr/local/globus-4.0.1/sbin/myproxy-server
    env         = GLOBUS_LOCATION=/usr/local/globus-4.0.1 LD_LIBRARY_PATH=/usr/local/globus-4.0.1/lib
    disable     = no
}
root@choate:~#
/etc/init.d/xinetd reload
Reloading internet superserver configuration: xinetd.
root@choate:~#
netstat -an | grep 7512
tcp 0 0 0.0.0.0:7512 0.0.0.0:* LISTEN
![]() | Note
---|---|
Again, your system may require a different environment variable than LD_LIBRARY_PATH if you're using Mac OS X or IRIX. |
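Since /etc/services now maps the myproxy-server name to its port, that mapping can be checked mechanically. A sketch that parses a temporary copy of the entry rather than the live file:

```shell
# Extract the port assigned to myproxy-server from a services-style line
svc=$(mktemp)
echo "myproxy-server 7512/tcp    # Myproxy server" > "$svc"
port=$(awk '$1 == "myproxy-server" { split($2, a, "/"); print a[1] }' "$svc")
echo "myproxy-server port: $port"
rm -f "$svc"
```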
Now we can check the Myproxy User's Guide to see how to load up a credential and retrieve it remotely:
bacon@choate:~$
export GLOBUS_LOCATION=/usr/local/globus-4.0.1
bacon@choate:~$
source $GLOBUS_LOCATION/etc/globus-user-env.sh
bacon@choate:~$
grid-proxy-destroy
bacon@choate:~$
grid-proxy-info
ERROR: Couldn't find a valid proxy. Use -debug for further information.
I destroyed my proxy to keep you from being confused. For the rest of this, I'll be using MyProxy.
bacon@choate:~$
myproxy-init -s choate
Your identity: /O=Grid/OU=GlobusTest/OU=simpleCA-choate.mcs.anl.gov/OU=mcs.anl.gov/CN=Charles Bacon
Enter GRID pass phrase for this identity:****
Creating proxy .............................................. Done
Proxy Verify OK
Your proxy is valid until: Wed Nov 23 09:48:55 2005
Enter MyProxy pass phrase:******
Verifying - Enter MyProxy pass phrase:******
A proxy valid for 168 hours (7.0 days) for user bacon now exists on choate.
bacon@choate:~$
grid-proxy-info
ERROR: Couldn't find a valid proxy. Use -debug for further information.
So what happened? I just loaded a 7 day credential into the MyProxy server on choate. For the next seven days, I'll be able to create proxies from there using the password I supplied as the MyProxy pass phrase. I'll show you what it looks like from cognito:
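One detail worth noting from the output above: the lifetime myproxy-init reported is 168 hours, which is where the "7.0 days" figure comes from:

```shell
# The reported credential lifetime, converted from hours to days
hours=168
echo "$hours hours = $(( hours / 24 )) days"
```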
bacon@cognito:~$
export GLOBUS_LOCATION=/usr/local/globus-4.0.1
bacon@cognito:~$
source $GLOBUS_LOCATION/etc/globus-user-env.sh
bacon@cognito:~$
myproxy-logon -s choate.mcs.anl.gov
Enter MyProxy pass phrase:******
A proxy has been received for user bacon in /tmp/x509up_u1817.
bacon@cognito:~$
grid-proxy-info
subject  : /O=Grid/OU=GlobusTest/OU=simpleCA-choate.mcs.anl.gov/OU=mcs.anl.gov/CN=Charles Bacon/CN=1390227170/CN=2137426425/CN=87430171
issuer   : /O=Grid/OU=GlobusTest/OU=simpleCA-choate.mcs.anl.gov/OU=mcs.anl.gov/CN=Charles Bacon/CN=1390227170/CN=2137426425
identity : /O=Grid/OU=GlobusTest/OU=simpleCA-choate.mcs.anl.gov/OU=mcs.anl.gov/CN=Charles Bacon
type     : Proxy draft (pre-RFC) compliant impersonation proxy
strength : 512 bits
path     : /tmp/x509up_u1817
timeleft : 11:58:41
And that's how MyProxy works. It turns out that I didn't need to copy my usercert to cognito at all, because I could've stored it in the MyProxy server to begin with.
In this section I'll add a cluster to my environment. I happen to have a PBS cluster already, so I'll add it. The cluster has a headnode called lucky0, with compute nodes lucky1-lucky6. The node lucky2 is currently down due to faulty memory hardware. Here's what it looks like:
[bacon@lucky0 bacon]$
pbsnodes -a
lucky1.mcs.anl.gov
     state = free
     np = 2
     ntype = cluster

lucky3.mcs.anl.gov
     state = free
     np = 2
     ntype = cluster

lucky4.mcs.anl.gov
     state = free
     np = 2
     ntype = cluster

lucky5.mcs.anl.gov
     state = free
     np = 2
     ntype = cluster

lucky6.mcs.anl.gov
     state = free
     np = 2
     ntype = cluster
The nodes share a file system called /home that is exported from lucky0:
[bacon@lucky1 bacon]$
df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/cciss/c0d0p1      33G  1.9G   29G   7% /
none                  252M     0  252M   0% /dev/shm
/dev/cciss/c0d1p1     167G  6.2G  153G   4% /sandbox
lucky0:/home          101G   83G   13G  87% /home
The cluster is running the Ganglia monitoring system:
[bacon@lucky0 bacon]$
ps auxww | grep gmond
nobody    2004  0.0  0.1 52132  916 ?      S    Jun30   0:00 /usr/sbin/gmond
bacon    19941  0.0  0.1  3700  584 pts/5  S    13:52   0:00 grep gmond
First, let's make sure we have the prereqs on lucky0.
[bacon@lucky0 bacon]$
echo $JAVA_HOME
/usr/java/j2sdk1.4.2_03
[bacon@lucky0 bacon]$
echo $ANT_HOME
/home/software/apache-ant-1.6.1
[bacon@lucky0 bacon]$
echo $PBS_HOME
[bacon@lucky0 bacon]$
ls /var/spool/pbs/
pbs_environment  sched_priv   server_name  spool
sched_logs       server_logs  server_priv  undelivered
[bacon@lucky0 bacon]$
which sudo
/usr/bin/sudo
We don't have a PBS_HOME variable set, but because we're using the default value of /var/spool/pbs, our log files will be detected okay for GRAM.
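The fallback being relied on here can be expressed directly in shell: use PBS_HOME when it is set, otherwise fall back to the conventional default location (a sketch of the effective behavior, not the actual GRAM code):

```shell
# Use PBS_HOME if set, else the conventional default spool directory
unset PBS_HOME
pbs_home="${PBS_HOME:-/var/spool/pbs}"
echo "PBS logs expected under: $pbs_home"
```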
Given that, I created a globus user on the cluster and created a directory for the installation:
[root@lucky0 /usr/local]#
id globus
uid=812(globus) gid=220(globus) groups=220(globus),564(globdev)
[root@lucky0 /usr/local]#
mkdir globus-4.0.1
[root@lucky0 /usr/local]#
chown globus:globus globus-4.0.1
Lucky is running Fedora Core 1, so I'm installing from source. The first new option here is --enable-wsgram-pbs, which I'm using because lucky already has PBS installed. It tells the installer to build and install the PBS GRAM scheduler adapter. Options for LSF and Condor are already in the installer, and an SGE adapter is available elsewhere.
[globus@lucky0 globus]$
tar xzf gt4.0.1-all-source-installer.tar.gz
[globus@lucky0 globus]$
cd gt4.0.1-all-source-installer
[globus@lucky0 gt4.0.1-all-source-installer]$
./configure --prefix=/usr/local/globus-4.0.1 --enable-wsgram-pbs
checking build system type... i686-pc-linux-gnu
checking for javac... /usr/java/j2sdk1.4.2_03/bin/javac
checking for ant... /home/software/apache-ant-1.6.1/bin/ant
configure: creating ./config.status
config.status: creating Makefile
[globus@lucky0 gt4.0.1-all-source-installer]$
make 2>&1 | tee installer.log
cd gpt-3.2autotools2004 && OBJECT_MODE=32 ./build_gpt
...
echo "Your build completed successfully. Please run make install."
Your build completed successfully. Please run make install.
[globus@lucky0 gt4.0.1-all-source-installer]$
make install
/usr/local/globus-4.0.1/sbin/gpt-postinstall
...
running /usr/local/globus-4.0.1/setup/globus/setup-globus-job-manager-pbs..
[ Changing to /usr/local/globus-4.0.1/setup/globus ]
find-pbs-tools: WARNING: "Cannot locate mpiexec"
find-pbs-tools: WARNING: "Cannot locate mpirun"
checking for mpiexec... no
checking for mpirun... no
checking for qdel... /home/software/openpbs-2.3.16/bin/qdel
checking for qstat... /home/software/openpbs-2.3.16/bin/qstat
checking for qsub... /home/software/openpbs-2.3.16/bin/qsub
checking for ssh... /usr/bin/ssh
find-pbs-tools: creating ./config.status
config.status: creating /usr/local/globus-4.0.1/lib/perl/Globus/GRAM/JobManager/pbs.pm
..Done
For a change of pace, I'm not going to use my SimpleCA on this cluster. Instead, I'm going to get a certificate from a real production CA. If you don't have one, just install the SimpleCA setup package and get a hostcert as we did in Section 3.3, “Setting up your second machine: Security”. I just thought it might be interesting to show how to use a production CA and how to combine resources whose identities were issued by multiple CAs.
In this example I'll be using the DOEGrids CA. Please note that there are eligibility requirements to use this CA. I am eligible because I work at Argonne National Laboratory. If you're looking for a production CA, you might want to check the list at the Terena Academic CA Repository.
First, I need to install the CA certificates for the DOE Grids CA:
[bacon@lucky0 bacon]$
ls -l /etc/grid-security/certificates/
-rw-r--r--  1 12035 106 1436 May  2  2003 1c3f2ca8.0
-rw-r--r--  1 12035 106 2114 May 27  2003 1c3f2ca8.signing_policy
-rw-r--r--  1 12035 106  953 May  2  2003 6349a761.0
-rw-r--r--  1 12035 106 1940 May 27  2003 6349a761.signing_policy
-rw-r--r--  1 12035 106 1679 May  2  2003 9d8753eb.0
-rw-r--r--  1 12035 106 1717 May 27  2003 9d8753eb.signing_policy
-rw-r--r--  1 12035 106 1448 May  6  2003 d1b603c3.0
-rw-r--r--  1 12035 106 2089 May 27  2003 d1b603c3.signing_policy
-rw-r--r--  1 12035 106 4082 May 13  2003 globus-host-ssl.conf.1c3f2ca8
-rw-r--r--  1 12035 106 4081 May 13  2003 globus-user-ssl.conf.1c3f2ca8
-rw-r--r--  1 12035 106 1743 May 13  2003 grid-security.conf.1c3f2ca8
The DOE Grids CA has a "Globus Support" package I installed, which came with these certificates, signing policies, and request files. Using them, I request a hostcert for lucky0, copy it into /etc/grid-security, and create a containercert copy:
[root@lucky0 /etc/grid-security]$
ls -l host* container*
-rw-r--r--  1 globus globus 1181 Feb 21  2005 containercert.pem
-r--------  1 globus globus  891 Feb 21  2005 containerkey.pem
-rw-r--r--  1 root   root   1181 Feb 21  2005 hostcert.pem
-rw-r--r--  1 root   root   1337 Feb 18  2005 hostcert_request.pem
-r--------  1 root   root    891 Feb 18  2005 hostkey.pem
[root@lucky0 /etc/grid-security]$
export GLOBUS_LOCATION=/usr/local/globus-4.0.1
[root@lucky0 /etc/grid-security]$
source $GLOBUS_LOCATION/etc/globus-user-env.sh
[root@lucky0 ~]$
grid-cert-info -file /etc/grid-security/hostcert.pem -subject
/DC=org/DC=doegrids/OU=Services/CN=host/lucky0.mcs.anl.gov
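The containercert.pem/containerkey.pem pair shown earlier is just a copy of the host credentials, re-owned for the container user. A hedged sketch of that step, demonstrated in a scratch directory standing in for /etc/grid-security (on a real host you would work in /etc/grid-security as root and the chown would apply):

```shell
# Create container credentials as copies of the host credentials.
# GRID_SEC is a scratch stand-in for /etc/grid-security so the steps
# can be exercised anywhere; the dummy files stand in for real PEMs.
GRID_SEC=$(mktemp -d)
echo "dummy cert" > "$GRID_SEC/hostcert.pem"
echo "dummy key"  > "$GRID_SEC/hostkey.pem"

cp "$GRID_SEC/hostcert.pem" "$GRID_SEC/containercert.pem"
cp "$GRID_SEC/hostkey.pem"  "$GRID_SEC/containerkey.pem"
chmod 644 "$GRID_SEC/containercert.pem"   # cert is world-readable
chmod 400 "$GRID_SEC/containerkey.pem"    # key readable by owner only
# On the real host, additionally (as root):
if [ "$(id -u)" = 0 ]; then
    chown globus:globus "$GRID_SEC/containercert.pem" "$GRID_SEC/containerkey.pem"
fi
ls -l "$GRID_SEC"
```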
I also got a usercert from the DOEGrids CA:
[bacon@lucky0 bacon]$
ls -l .globus/
total 8
-rw-r--r-- 1 bacon globdev 1600 Apr 4 2005 usercert.pem
-rw------- 1 bacon globdev 1920 Apr 4 2005 userkey.pem
[bacon@lucky0 bacon]$
export GLOBUS_LOCATION=/usr/local/globus-4.0.1
[bacon@lucky0 bacon]$
source $GLOBUS_LOCATION/etc/globus-user-env.sh
[bacon@lucky0 bacon]$
grid-cert-info -subject
/DC=org/DC=doegrids/OU=People/CN=Charles Bacon 332900
Because my subject name is different on this machine, the grid-mapfile
looks a little different too:
[root@lucky0 ~]$
vim /etc/grid-security/grid-mapfile
[root@lucky0 ~]$
grep bacon /etc/grid-security/grid-mapfile
"/DC=org/DC=doegrids/OU=People/CN=Charles Bacon 332900" bacon
This cluster doesn't have any special storage nodes, so we'll just set up GridFTP on the head node lucky0:
root@lucky0 /etc/grid-security#
vim /etc/xinetd.d/gridftp
root@lucky0 /etc/grid-security#
vim /etc/services
root@lucky0 /etc/grid-security#
/etc/init.d/xinetd reload
Reloading internet superserver configuration: xinetd.
[root@lucky0 ~]$
netstat -an | grep 2811
tcp 0 0 0.0.0.0:2811 0.0.0.0:* LISTEN
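For reference, the two files edited above typically look like the following. This is a hedged sketch following the standard GT4 GridFTP-over-xinetd pattern; GLOBUS_LOCATION matches this install, but check the GridFTP Admin Guide for the options appropriate to your site:

```
# /etc/xinetd.d/gridftp
service gsiftp
{
    instances               = 100
    socket_type             = stream
    wait                    = no
    user                    = root
    env                     += GLOBUS_LOCATION=/usr/local/globus-4.0.1
    env                     += LD_LIBRARY_PATH=/usr/local/globus-4.0.1/lib
    server                  = /usr/local/globus-4.0.1/sbin/globus-gridftp-server
    server_args             = -i
    log_on_success          += DURATION
    nice                    = 10
    disable                 = no
}

# addition to /etc/services
gsiftp          2811/tcp
```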
Now user bacon can run a test transfer:
[bacon@lucky0 bacon]$
globus-url-copy gsiftp://lucky0.mcs.anl.gov/etc/group gsiftp://lucky0.mcs.anl.gov/tmp/newtest
error: globus_ftp_control: gss_init_sec_context failed
globus_gsi_gssapi: Error with gss credential handle
globus_credential: Valid credentials could not be found in any of the possible locations specified by the credential search order.
Valid credentials could not be found in any of the possible locations specified by the credential search order.

Attempt 1
globus_credential: Error reading host credential
globus_sysconfig: Error with certificate filename
globus_sysconfig: Error with certificate filename
globus_sysconfig: File is not owned by current user: /etc/grid-security/hostcert.pem is not owned by current user

Attempt 2
globus_credential: Error reading proxy credential
globus_sysconfig: Could not find a valid proxy certificate file location
globus_sysconfig: Error with key filename
globus_sysconfig: File does not exist: /tmp/x509up_u1817 is not a valid file

Attempt 3
globus_credential: Error reading user credential
globus_credential: Key is password protected: GSI does not currently support password protected private keys.
OpenSSL Error: pem_lib.c:401: in library: PEM routines, function PEM_do_header: bad password read
Whoops, forgot to create a proxy. Let's try again:
[bacon@lucky0 bacon]$
grid-proxy-init
Your identity: /DC=org/DC=doegrids/OU=People/CN=Charles Bacon 332900
Enter GRID pass phrase for this identity: ********
Creating proxy .................................... Done
Your proxy is valid until: Wed Nov 23 22:21:49 2005
[bacon@lucky0 bacon]$
globus-url-copy gsiftp://lucky0.mcs.anl.gov/etc/group gsiftp://lucky0.mcs.anl.gov/tmp/newtest
[bacon@lucky0 bacon]$
diff /tmp/newtest /etc/group
[bacon@lucky0 bacon]$
Much better. Looks like GridFTP works on the new machine.
I'm going to set up the init.d scripts for the container on this cluster now:
[globus@lucky0 globus]$
vim /usr/local/globus-4.0.1/start-stop
[globus@lucky0 globus]$
cat /usr/local/globus-4.0.1/start-stop
#! /bin/sh
set -e
export GLOBUS_LOCATION=/usr/local/globus-4.0.1
export JAVA_HOME=/usr/java/j2sdk1.4.2_03/
export ANT_HOME=/home/software/apache-ant-1.6.1
export GLOBUS_OPTIONS="-Xms256M -Xmx512M"
. $GLOBUS_LOCATION/etc/globus-user-env.sh
cd $GLOBUS_LOCATION
case "$1" in
    start)
        $GLOBUS_LOCATION/sbin/globus-start-container-detached -p 8443
        ;;
    stop)
        $GLOBUS_LOCATION/sbin/globus-stop-container-detached
        ;;
    *)
        echo "Usage: globus {start|stop}" >&2
        exit 1
        ;;
esac
exit 0
[globus@lucky0 globus]$
chmod +x /usr/local/globus-4.0.1/start-stop
Notice that the JAVA_HOME and ANT_HOME are different because I'm using different versions on this machine than the other machines.
Now, as root, we create the init.d script:
[root@lucky0 ~]$
vim /etc/init.d/globus-4.0.1
[root@lucky0 ~]$
chmod +x /etc/init.d/globus-4.0.1
[root@lucky0 ~]$
/etc/init.d/globus-4.0.1 start
Starting Globus container. PID: 15388
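The contents of /etc/init.d/globus-4.0.1 aren't repeated here; a hedged sketch of what it contains, per the earlier section (it simply runs the start-stop script as the unprivileged globus user):

```
#!/bin/sh
# Sketch of /etc/init.d/globus-4.0.1: delegate to the start-stop
# script as the globus user rather than running the container as root.
case "$1" in
    start)
        su globus -c "/usr/local/globus-4.0.1/start-stop start"
        ;;
    stop)
        su globus -c "/usr/local/globus-4.0.1/start-stop stop"
        ;;
    restart)
        $0 stop
        $0 start
        ;;
    *)
        echo "Usage: $0 {start|stop|restart}" >&2
        exit 1
        ;;
esac
exit 0
```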
This init script looks the same as before because I'm using the same installation directory on this machine as on the others. It looks like the container started up all right, so we'll move on and configure RFT.
I'd like to set up a second RFT server. I could create a new database on choate and use that, but I'd rather let this server run its own DB.
[root@lucky0 ~]#
yum install postgresql
Gathering header information file(s) from server(s)
Server: Fedora Core 1 - i386 - Base
Server: Fedora Core 1 - i386 - Released Updates
Finding updated packages
Downloading needed headers
Resolving dependencies
Dependencies resolved
I will do the following:
[install: postgresql 7.3.4-11.i386]
Is this ok [y/N]:y
Getting postgresql-7.3.4-11.i386.rpm
postgresql-7.3.4-11.i386. 100% |=========================| 1.6 MB 00:01
Running test transaction:
Test transaction complete, Success!
postgresql 100 % done 1/1
Installed: postgresql 7.3.4-11.i386
Transaction(s) Complete
[12 10:41 root@lucky0:~]#
yum install postgresql-server
Gathering header information file(s) from server(s)
Server: Fedora Core 1 - i386 - Base
Server: Fedora Core 1 - i386 - Released Updates
Finding updated packages
Downloading needed headers
Resolving dependencies
Dependencies resolved
I will do the following:
[install: postgresql-server 7.3.4-11.i386]
Is this ok [y/N]:y
Getting postgresql-server-7.3.4-11.i386.rpm
postgresql-server-7.3.4-1 100% |=========================| 2.6 MB 00:02
Running test transaction:
Test transaction complete, Success!
postgresql-server 100 % done 1/1
Installed: postgresql-server 7.3.4-11.i386
Transaction(s) Complete
Now I can edit the config files and create a globus postgres user:
[root@lucky0 ~]$
/etc/init.d/postgresql start
Initializing database: [ OK ]
Starting postgresql service: [ OK ]
[root@lucky0 ~]$
vim /var/lib/pgsql/data/postgresql.conf.default
[root@lucky0 ~]$
grep tcpip /var/lib/pgsql/data/postgresql.conf.default
tcpip_socket = true
[root@lucky0 ~]$
cp /usr/share/pgsql/pg_hba.conf.sample /var/lib/pgsql/data/pg_hba.conf
[root@lucky0 ~]$
vim /var/lib/pgsql/data/pg_hba.conf
[root@lucky0 ~]$
tail -1 /var/lib/pgsql/data/pg_hba.conf
host rftDatabase "globus" "140.221.65.193" 255.255.255.255 md5
[root@lucky0 ~]$
/etc/init.d/postgresql restart
Stopping postgresql service: [ OK ]
Starting postgresql service: [ OK ]
[root@lucky0 ~]$
netstat -an | grep 5432
tcp 0 0 0.0.0.0:5432 0.0.0.0:* LISTEN
unix 2 [ ACC ] STREAM LISTENING 27017998 /tmp/.s.PGSQL.5432
[root@lucky0 ~]$
su postgres -c "createuser -P globus"
bash: /root/.bashrc: Permission denied
Enter password for user "globus": ******
Enter it again: ******
Shall the new user be allowed to create databases? (y/n) y
Shall the new user be allowed to create more new users? (y/n) n
CREATE USER
For the netstat line, if you only see the unix socket, you haven't set up TCP/IP connections correctly.
As you can see, the FC1 RPM for postgresql had a slightly different setup than the Debian .deb we used earlier, but the basic points were the same. Now we can create the RFT database as the globus user and update the password in the RFT config file:
[globus@lucky0 globus]$
createdb rftDatabase
CREATE DATABASE
[globus@lucky0 globus]$
psql -d rftDatabase -f /usr/local/globus-4.0.1/share/globus_wsrf_rft/rft_schema.sql
psql:/usr/local/globus-4.0.1/share/globus_wsrf_rft/rft_schema.sql:6: NOTICE: CREATE TABLE / PRIMARY KEY will create implicit index 'requestid_pkey' for table 'requestid'
CREATE TABLE
psql:/usr/local/globus-4.0.1/share/globus_wsrf_rft/rft_schema.sql:11: NOTICE: CREATE TABLE / PRIMARY KEY will create implicit index 'transferid_pkey' for table 'transferid'
CREATE TABLE
psql:/usr/local/globus-4.0.1/share/globus_wsrf_rft/rft_schema.sql:30: NOTICE: CREATE TABLE / PRIMARY KEY will create implicit index 'request_pkey' for table 'request'
CREATE TABLE
psql:/usr/local/globus-4.0.1/share/globus_wsrf_rft/rft_schema.sql:65: NOTICE: CREATE TABLE / PRIMARY KEY will create implicit index 'transfer_pkey' for table 'transfer'
CREATE TABLE
CREATE TABLE
CREATE TABLE
CREATE INDEX
[globus@lucky0 globus]$
vim /usr/local/globus-4.0.1/etc/globus_wsrf_rft/jndi-config.xml
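The edit points RFT at the new local database. A hedged sketch of the relevant fragment of jndi-config.xml: the parameter names follow the GT4 RFT configuration, the connectionString and userName values are the ones set up above, and the password is whatever was given to createuser (elided here):

```
<parameter>
    <name>connectionString</name>
    <value>jdbc:postgresql://lucky0.mcs.anl.gov/rftDatabase</value>
</parameter>
<parameter>
    <name>userName</name>
    <value>globus</value>
</parameter>
<parameter>
    <name>password</name>
    <value>(the password given to createuser above)</value>
</parameter>
```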
Now we can restart the container, and we shouldn't see any RFT warnings:
[root@lucky0 ~]$
/etc/init.d/globus-4.0.1 restart
Stopping Globus container. PID: 15388
Container stopped
Starting Globus container. PID: 15774
[root@lucky0 ~]$
head /usr/local/globus-4.0.1/var/container.log
Starting SOAP server at: https://140.221.65.193:8443/wsrf/services/
With the following services:
[1]: https://140.221.65.193:8443/wsrf/services/TriggerFactoryService
[2]: https://140.221.65.193:8443/wsrf/services/DelegationTestService
[3]: https://140.221.65.193:8443/wsrf/services/SecureCounterService
[4]: https://140.221.65.193:8443/wsrf/services/IndexServiceEntry
[5]: https://140.221.65.193:8443/wsrf/services/DelegationService
[6]: https://140.221.65.193:8443/wsrf/services/InMemoryServiceGroupFactory
[7]: https://140.221.65.193:8443/wsrf/services/mds/test/execsource/IndexService
Looks good.
Our GRAM configuration needs a few extra steps now that we're trying to use a scheduler. First, we'll need the sudoers entries like last time:
[root@lucky0 ~]$ visudo
globus ALL=(bacon) NOPASSWD: /usr/local/globus-4.0.1/libexec/globus-gridmap-and-execute -g /etc/grid-security/grid-mapfile /usr/local/globus-4.0.1/libexec/globus-job-manager-script.pl *
globus ALL=(bacon) NOPASSWD: /usr/local/globus-4.0.1/libexec/globus-gridmap-and-execute -g /etc/grid-security/grid-mapfile /usr/local/globus-4.0.1/libexec/globus-gram-local-proxy-tool *
Let's make sure we can submit a test job to the fork-run GRAM first:
[bacon@lucky0 bacon]$
globusrun-ws -submit -s -c /bin/date
Delegating user credentials...Done.
Submitting job...Done.
Job ID: uuid:9ff8ce66-5c45-11da-8577-0002a5ad41e5
Termination time: 11/24/2005 17:21 GMT
Current job state: Active
Current job state: CleanUp-Hold
Wed Nov 23 11:21:52 CST 2005
Current job state: CleanUp
Current job state: Done
Destroying job...Done.
Cleaning up any delegated credentials... Done.
Okay, so far so good. Let's also make sure that PBS is working without GRAM:
[bacon@lucky0 bacon]$ vim mysub
#!/bin/sh
/bin/hostname
[bacon@lucky0 bacon]$ chmod +x mysub
[bacon@lucky0 bacon]$ qsub mysub
4217.lucky0.mcs.anl.gov
[bacon@lucky0 bacon]$ cat mysub.o4217
lucky1.mcs.anl.gov
As you can see, the PBS job was submitted, then ran on node lucky1. The output was placed in my home directory. Since the PBS server works, let's get the GRAM interface to it properly configured.
For the notification system in GRAM to work, the globus user will need access
to the scheduler logs. The location of the logs is kept in
$GLOBUS_LOCATION/etc/globus-pbs.conf
. We should verify that these are readable:
[globus@lucky0 etc]$
cat globus-pbs.conf
log_path=/var/spool/pbs/server_logs
[globus@lucky0 etc]$
ls -l /var/spool/pbs/server_logs
-rw-r--r-- 1 root root 350 Jul 21 11:20 20050708
-rw-r--r-- 1 root root 2786 Jul 22 12:42 20050721
-rw-r--r-- 1 root root 241232 Jul 23 10:16 20050722
-rw-r--r-- 1 root root 30295 Jul 25 15:55 20050723
-rw-r--r-- 1 root root 53939 Sep 14 18:07 20050725
-rw-r--r-- 1 root root 2055 Oct 25 15:46 20050914
-rw-r--r-- 1 root root 4775 Oct 26 09:35 20051025
-rw-r--r-- 1 root root 10443 Nov 1 10:48 20051026
-rw-r--r-- 1 root root 5016 Nov 23 11:23 20051101
-rw-r--r-- 1 root root 10298 Nov 23 12:04 20051123
As you can see, the logs are readable, so we will be okay.
The one thing we might need to do now is create a filesystem mapping. Lots of
clusters have storage nodes that have a different view of the filesystem than the compute nodes. For instance, a storage node might see /exports/home
for the filesystem called /home
on the cluster.
Our cluster isn't that complicated, since lucky0 has the /home
system mounted as /home
already. Therefore, we get to use the trivial filesystem map that performs no translations:
[globus@lucky0 globus]$
tail -11 /usr/local/globus-4.0.1/etc/gram-service/globus_gram_fs_map_config.xml
<ns1:scheduler xmlns:xsd="http://www.w3.org/2001/XMLSchema" xsi:type="xsd:string">PBS</ns1:scheduler>
<ns1:ftpServer xsi:type="ns1:FtpServerType">
    <ns1:protocol xmlns:xsd="http://www.w3.org/2001/XMLSchema" xsi:type="xsd:string">gsiftp</ns1:protocol>
    <ns1:host xmlns:xsd="http://www.w3.org/2001/XMLSchema" xsi:type="xsd:string">lucky0.mcs.anl.gov</ns1:host>
    <ns1:port xmlns:xsd="http://www.w3.org/2001/XMLSchema" xsi:type="xsd:unsignedShort">2811</ns1:port>
</ns1:ftpServer>
<ns1:mapping xsi:type="ns1:FileSystemPathMappingType">
    <ns1:jobPath xmlns:xsd="http://www.w3.org/2001/XMLSchema" xsi:type="xsd:string">/</ns1:jobPath>
    <ns1:ftpPath xmlns:xsd="http://www.w3.org/2001/XMLSchema" xsi:type="xsd:string">/</ns1:ftpPath>
</ns1:mapping>
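If lucky0 had been the kind of cluster described earlier, where a storage node sees /exports/home for the compute nodes' /home, the trivial mapping would instead be replaced with something like this hedged sketch, using the same schema:

```
<ns1:mapping xsi:type="ns1:FileSystemPathMappingType">
    <ns1:jobPath xmlns:xsd="http://www.w3.org/2001/XMLSchema" xsi:type="xsd:string">/home</ns1:jobPath>
    <ns1:ftpPath xmlns:xsd="http://www.w3.org/2001/XMLSchema" xsi:type="xsd:string">/exports/home</ns1:ftpPath>
</ns1:mapping>
```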
More information on the filesystem map is available from the GRAM Admin Guide.
So let's run some jobs through PBS:
[bacon@lucky0 bacon]$
vim a.rsl
[bacon@lucky0 bacon]$
cat a.rsl
<job>
    <executable>my_echo</executable>
    <directory>${GLOBUS_USER_HOME}</directory>
    <argument>Hello</argument>
    <argument>World!</argument>
    <stdout>${GLOBUS_USER_HOME}/stdout</stdout>
    <stderr>${GLOBUS_USER_HOME}/stderr</stderr>
    <fileStageIn>
        <transfer>
            <sourceUrl>gsiftp://lucky0.mcs.anl.gov:2811/bin/echo</sourceUrl>
            <destinationUrl>file:///${GLOBUS_USER_HOME}/my_echo</destinationUrl>
        </transfer>
    </fileStageIn>
    <fileCleanUp>
        <deletion>
            <file>file:///${GLOBUS_USER_HOME}/my_echo</file>
        </deletion>
    </fileCleanUp>
</job>
[bacon@lucky0 bacon]$
globusrun-ws -Ft PBS -submit -S -f a.rsl
Delegating user credentials...Done.
Submitting job...Done.
Job ID: uuid:fcc1cc9e-5c48-11da-be54-0002a5ad41e5
Termination time: 11/24/2005 17:45 GMT
Current job state: StageIn
Current job state: Pending
Current job state: Active
Current job state: CleanUp
Current job state: Done
Destroying job...Done.
Cleaning up any delegated credentials...Done.
[bacon@lucky0 bacon]$
cat stdout
[bacon@lucky0 bacon]$
cat stderr
Permission denied, please try again. Permission denied, please try again. Permission denied (publickey,password,keyboard-interactive).
Oh, I see. GRAM is trying to scp back my results, but the lucky cluster is configured to use rsh between nodes.
A quick command will fix that:
[globus@lucky0 globus-4.0.1]$
export GLOBUS_LOCATION=/usr/local/globus-4.0.1
[globus@lucky0 globus-4.0.1]$
$GLOBUS_LOCATION/setup/globus/setup-globus-job-manager-pbs --remote-shell=rsh
Error locating PBS commands, aborting!
How's that? Oh, it turns out the script wants to be run from the $GLOBUS_LOCATION/setup/globus
directory so it can run some other tools:
[globus@lucky0 globus-4.0.1]$
cd $GLOBUS_LOCATION/setup/globus
[globus@lucky0 globus]$
./setup-globus-job-manager-pbs --remote-shell=rsh
find-pbs-tools: WARNING: "Cannot locate mpiexec"
find-pbs-tools: WARNING: "Cannot locate mpirun"
checking for mpiexec... no
checking for mpirun... no
checking for qdel... /home/software/openpbs-2.3.16/bin/qdel
checking for qstat... /home/software/openpbs-2.3.16/bin/qstat
checking for qsub... /home/software/openpbs-2.3.16/bin/qsub
checking for rsh... /usr/bin/rsh
find-pbs-tools: creating ./config.status
config.status: creating /usr/local/globus-4.0.1/lib/perl/Globus/GRAM/JobManager/pbs.pm
Okay, let's try our job submission again:
[bacon@lucky0 bacon]$
rm stdout stderr
[bacon@lucky0 bacon]$
globusrun-ws -Ft PBS -submit -S -f a.rsl
Delegating user credentials...Done.
Submitting job...Done.
Job ID: uuid:63b064dc-5c4a-11da-804c-0002a5ad41e5
Termination time: 11/24/2005 17:55 GMT
Current job state: StageIn
Current job state: Pending
Current job state: Active
Current job state: CleanUp
Current job state: Done
Destroying job...Done.
Cleaning up any delegated credentials...Done.
[bacon@lucky0 bacon]$
cat stdout
Hello World!
Success!
In this section I'm going to configure the Ganglia providers for
the Index Service. This will let us do cluster monitoring through
MDS. Basically I just need to edit the file $GLOBUS_LOCATION/etc/globus_wsrf_mds_usefulrp/gluerp.xml
:
[globus@lucky0 etc]$
vim $GLOBUS_LOCATION/etc/globus_wsrf_mds_usefulrp/gluerp.xml
[globus@lucky0 etc]$
head $GLOBUS_LOCATION/etc/globus_wsrf_mds_usefulrp/gluerp.xml
<config xmlns="http://mds.globus.org/2004/10/gluerp-config">
    <defaultProvider>java org.globus.mds.usefulrp.glue.GangliaElementProducer</defaultProvider>
    <!-- To enable the use of ganglia to provide cluster information,
         replace the above with the following:
Now we restart the container:
[root@lucky0 ~]$
/etc/init.d/globus-4.0.1 restart
Stopping Globus container. PID: 15774
Container stopped
Starting Globus container. PID: 18683
Now I get GLUE Schema information about my cluster in the wsrf-query output:
[bacon@lucky0 bacon]$
wsrf-query -s https://140.221.65.193:8443/wsrf/services/DefaultIndexService '/*' | grep GLUE
aggregated, which in this case is the GLUE cluster
<ns11:ResourcePropertyName>glue:GLUECE</ns11:ResourcePropertyName>
<ns1:GLUECE xmlns:ns1="http://mds.globus.org/glue/ce/1.1">
</ns1:GLUECE>
aggregated, which in this case is the GLUE cluster
<ns11:ResourcePropertyName>glue:GLUECE</ns11:ResourcePropertyName>
<ns1:GLUECE xmlns:ns1="http://mds.globus.org/glue/ce/1.1">
</ns1:GLUECE>
aggregated, which in this case is the GLUE cluster
<ns11:ResourcePropertyName>glue:GLUECE</ns11:ResourcePropertyName>
<ns1:GLUECE xmlns:ns1="http://mds.globus.org/glue/ce/1.1">
</ns1:GLUECE>
I'd really like to see this in WebMDS, though, which is running on cognito. That probably means it's time to establish cross-CA trust between lucky (using DOE Grids CA) and choate/cognito (using my SimpleCA). That's probably worth a new section.
It's actually not that hard to add trust between these two environments. First, let's get lucky to trust choate's SimpleCA. That's easy enough, because we have the setup package we can install. I copy it over from choate, then run:
[globus@lucky0 globus]$
$GLOBUS_LOCATION/sbin/gpt-build /tmp/globus_simple_ca_ebb88ce5_setup-0.18.tar.gz
gpt-build ====> CHECKING BUILD DEPENDENCIES FOR globus_simple_ca_ebb88ce5_setup
gpt-build ====> Changing to /home/globus/BUILD/globus_simple_ca_ebb88ce5_setup-0.18/
gpt-build ====> BUILDING globus_simple_ca_ebb88ce5_setup
gpt-build ====> Changing to /home/globus/BUILD
gpt-build ====> REMOVING empty package globus_simple_ca_ebb88ce5_setup-noflavor-data
gpt-build ====> REMOVING empty package globus_simple_ca_ebb88ce5_setup-noflavor-dev
gpt-build ====> REMOVING empty package globus_simple_ca_ebb88ce5_setup-noflavor-doc
gpt-build ====> REMOVING empty package globus_simple_ca_ebb88ce5_setup-noflavor-pgm_static
gpt-build ====> REMOVING empty package globus_simple_ca_ebb88ce5_setup-noflavor-rtl
[globus@lucky0 globus]$
$GLOBUS_LOCATION/sbin/gpt-postinstall
running /usr/local/globus-4.0.1/setup/globus/./setup-ssl-utils.ebb88ce5..[ Changing to /usr/local/globus-4.0.1/setup/globus/. ]
setup-ssl-utils: Configuring ssl-utils package
Running setup-ssl-utils-sh-scripts...

***************************************************************************
Note: To complete setup of the GSI software you need to run the
following script as root to configure your security configuration
directory:

/usr/local/globus-4.0.1/setup/globus_simple_ca_ebb88ce5_setup/setup-gsi

For further information on using the setup-gsi script, use the -help
option. The -default option sets this security configuration to be the
default, and -nonroot can be used on systems where root access is not
available.
***************************************************************************

setup-ssl-utils: Complete
..Done

WARNING: The following packages were not set up correctly:
        globus_simple_ca_ebb88ce5_setup-noflavor-pgm
Check the package documentation or run postinstall -verbose to see what happened
Then, as root, I run setup-gsi, but leave off the -default
, because I don't want to switch CAs:
[root@lucky0 ~]$
$GLOBUS_LOCATION/setup/globus_simple_ca_ebb88ce5_setup/setup-gsi
setup-gsi: Configuring GSI security
Installing /etc/grid-security/certificates//grid-security.conf.ebb88ce5...
Running grid-security-config...
Installing Globus CA certificate into trusted CA certificate directory...
Installing Globus CA signing policy into trusted CA certificate directory...
WARNING: Can't match the previously installed GSI configuration files to a CA certificate.
For the configuration files ending in "00000000" located in /etc/grid-security/certificates/,
change the "00000000" extension to the hash of the correct CA certificate.
setup-gsi: Complete
The warning is harmless; setup-gsi is just unsure about the DOE Grids CA configuration files already in the /etc/grid-security/certificates
directory. The important part is that the SimpleCA files are now in place:
[root@lucky0 ~]$
ls -l /etc/grid-security/certificates/*ebb88ce5*
-rw-r--r-- 1 root root 936 Nov 23 14:00 /etc/grid-security/certificates/ebb88ce5.0
-rw-r--r-- 1 root root 1353 Nov 23 14:00 /etc/grid-security/certificates/ebb88ce5.signing_policy
-rw-r--r-- 1 root root 2670 Nov 23 14:00 /etc/grid-security/certificates/globus-host-ssl.conf.ebb88ce5
-rw-r--r-- 1 root root 2781 Nov 23 14:00 /etc/grid-security/certificates/globus-user-ssl.conf.ebb88ce5
-rw-r--r-- 1 root root 1387 Nov 23 14:00 /etc/grid-security/certificates/grid-security.conf.ebb88ce5
[root@lucky0 ~]$
grid-default-ca -list
The available CA configurations installed on this host are:

Directory: /etc/grid-security/certificates

1) 1c3f2ca8 - /DC=org/DC=DOEGrids/OU=Certificate Authorities/CN=DOEGrids CA 1
2) 42864e48 - /C=US/O=Globus/CN=Globus Certification Authority
3) 6349a761 - /O=DOE Science Grid/OU=Certificate Authorities/CN=Certificate Manager
4) 9d8753eb - /DC=net/DC=es/OU=Certificate Authorities/OU=DOE Science Grid/CN=pki1
5) d1b603c3 - /DC=net/DC=ES/O=ESnet/OU=Certificate Authorities/CN=ESnet Root CA 1
6) ebb88ce5 - /O=Grid/OU=GlobusTest/OU=simpleCA-choate.mcs.anl.gov/CN=Globus Simple CA

The default CA is: /O=Grid/OU=GlobusTest/OU=simpleCA-choate.mcs.anl.gov/CN=Globus Simple CA
       Location: /etc/grid-security/certificates/ebb88ce5.0
Hmm. The default did switch to the new SimpleCA. I will switch it back:
[root@lucky0 ~]$
grid-default-ca
The available CA configurations installed on this host are:

Directory: /etc/grid-security/certificates

1) 1c3f2ca8 - /DC=org/DC=DOEGrids/OU=Certificate Authorities/CN=DOEGrids CA 1
2) 42864e48 - /C=US/O=Globus/CN=Globus Certification Authority
3) 6349a761 - /O=DOE Science Grid/OU=Certificate Authorities/CN=Certificate Manager
4) 9d8753eb - /DC=net/DC=es/OU=Certificate Authorities/OU=DOE Science Grid/CN=pki1
5) d1b603c3 - /DC=net/DC=ES/O=ESnet/OU=Certificate Authorities/CN=ESnet Root CA 1
6) ebb88ce5 - /O=Grid/OU=GlobusTest/OU=simpleCA-choate.mcs.anl.gov/CN=Globus Simple CA

The default CA is: /O=Grid/OU=GlobusTest/OU=simpleCA-choate.mcs.anl.gov/CN=Globus Simple CA
       Location: /etc/grid-security/certificates/ebb88ce5.0

Enter the index number of the CA to set as the default: 1
setting the default CA to: /DC=org/DC=DOEGrids/OU=Certificate Authorities/CN=DOEGrids CA 1
linking /etc/grid-security/certificates//grid-security.conf.1c3f2ca8 to /etc/grid-security/grid-security.conf
linking /etc/grid-security/certificates//globus-host-ssl.conf.1c3f2ca8 to /etc/grid-security/globus-host-ssl.conf
linking /etc/grid-security/certificates//globus-user-ssl.conf.1c3f2ca8 to /etc/grid-security/globus-user-ssl.conf
...done.
Now on choate and cognito, I need a copy of the DOE Grids certificates. I will just copy them into place:
root@choate:/etc/grid-security/certificates#
scp lucky0:/etc/grid-security/certificates/\*.0 .
root@lucky0's password:********
1c3f2ca8.0 100% 1436 1.4KB/s 00:00
42864e48.0 100% 806 0.8KB/s 00:00
6349a761.0 100% 953 0.9KB/s 00:00
9d8753eb.0 100% 1679 1.6KB/s 00:00
d1b603c3.0 100% 1448 1.4KB/s 00:00
ebb88ce5.0 100% 936 0.9KB/s 00:00
root@choate:/etc/grid-security/certificates#
scp lucky0:/etc/grid-security/certificates/\*.signing_policy .
root@lucky0's password:********
1c3f2ca8.signing_policy 100% 2114 2.1KB/s 00:00
42864e48.signing_policy 100% 1329 1.3KB/s 00:00
6349a761.signing_policy 100% 1940 1.9KB/s 00:00
9d8753eb.signing_policy 100% 1717 1.7KB/s 00:00
d1b603c3.signing_policy 100% 2089 2.0KB/s 00:00
ebb88ce5.signing_policy 100% 1353 1.3KB/s 00:00
Of course, I do the same on cognito also.
Next up is to modify the grid-mapfiles so that my user can use either identity to the machines. First, lucky:
[root@lucky0 ~]$
vim /etc/grid-security/grid-mapfile
[root@lucky0 ~]$
grep bacon /etc/grid-security/grid-mapfile
"/DC=org/DC=doegrids/OU=People/CN=Charles Bacon 332900" bacon "/O=Grid/OU=GlobusTest/OU=simpleCA-choate.mcs.anl.gov/OU=mcs.anl.gov/CN=Charles Bacon" bacon
Then, for choate and cognito:
root@choate:/etc/grid-security/certificates#
vim /etc/grid-security/grid-mapfile
root@choate:/etc/grid-security/certificates#
cat /etc/grid-security/grid-mapfile
"/O=Grid/OU=GlobusTest/OU=simpleCA-choate.mcs.anl.gov/OU=mcs.anl.gov/CN=Charles Bacon" bacon "/DC=org/DC=doegrids/OU=People/CN=Charles Bacon 332900" bacon
As a test, I should be able to run a job on choate from lucky:
[bacon@lucky0 bacon]$
globusrun-ws -F choate.mcs.anl.gov -submit -s -c /bin/hostname
Delegating user credentials...Done.
Submitting job...Done.
Job ID: uuid:84d1c372-5c5e-11da-a975-0002a5ad41e5
Termination time: 11/24/2005 20:19 GMT
Current job state: Active
Current job state: CleanUp-Hold
choate.mcs.anl.gov
Current job state: CleanUp
Current job state: Done
Destroying job...Done.
Cleaning up any delegated credentials...Done.
Works like a champ!
Congratulations! You've set up a couple of services, and you have the infrastructure to do more. Here are some resources that might help you find your next steps:
- Master list of Globus 4.0 documentation - has documentation about any of the components you might be interested in, but you will need to know already what you're interested in. They all have User Guides that will help you learn about the clients, Admin Guides that will help you configure the services, and Developer's Guides to help you learn about the code.
- List of other grid software - starts with two sections describing the role of the Toolkit in Grid Computing, followed by links out to many domain-specific pieces of software that build on top of the Toolkit.
- List of organizations, publications, and news - Find more research papers, books, press releases and such.
- List of support mailing lists - Find the domain-specific mailing list to get help. [email protected] is the right place to ask questions if you had trouble with the quickstart.
- List of CVS development tools - Browse our CVS repository, and learn how to check out the latest code.