Administrator Guide
This Guide covers the installation, configuration, and maintenance of gCore.
Prerequisites
The following software is a prerequisite for the installation of gCore:
- A platform compatible or made compatible with GT requirements.
- J2SE 1.5.08 SDK or greater. Sun's reference implementation is recommended, but versions from IBM, HP, or BEA should work equally well.
- Ant 1.6.5+ to build gCF sources or to develop services with it.
- An SVN client to install gCore from the SVN repository.
- GNU tar to install gCore from archived distributions.
- sudo to execute shell commands with controlled super-user privileges.
Running gCore in a secure infrastructure raises further prerequisites:
- An NTP server to synchronise your clock with those of other machines, as required for correct credential validation.
- [coming soon]
Finally, at least a static IP address (if not a DNS name) is needed for all but the simplest testing scenarios.
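A quick way to check that the main software prerequisites are in place is to query their versions from a shell; a minimal sketch, and output formats vary by vendor and version:
java -version      # should report 1.5.0 or greater
ant -version       # should report 1.6.5 or greater
svn --version      # any recent SVN client will do
tar --version      # should identify itself as GNU tar
sudo -V            # confirms sudo is available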
Installation
gCore may be installed from an SVN repository or from pre-packaged archives.
In the first case, installing gCore amounts to downloading it into a directory of choice, the gCore location. In the second case, installing gCore is simply a matter of expanding the downloaded archive into the gCore location. In either case, proceed with the installation as a non-privileged user with read and write permissions for the gCore location.
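By way of illustration, and assuming a hypothetical repository URL, archive name, and a gCore location of /opt/gCore (replace all three with the values published for the release you intend to install), the two installation paths look as follows:
# install from the SVN repository into the gCore location
svn checkout http://svn.example.org/gcore/trunk /opt/gCore
# or expand a pre-packaged archive into the gCore location
mkdir -p /opt/gCore
tar xzf gCore-<version>.tar.gz -C /opt/gCore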
At the end of the process, the gCore location should contain the following structure:
|- bin
|- config
|- endorsed
|- etc
|- lib
|- libexec
|- share
Some folders are of immediate interest to administrators and developers alike:
bin | executables |
config | gHN configuration files |
etc | container and deployed service configuration files |
lib | standard and deployed service libraries |
share | build tools, standard and deployed service interfaces/schemas |
Configuration
Configuring the installation can be broken down into the following steps: configuring the environment, the container, the gHN associated with a running instance of the container, and the operation of the gHN in a secure infrastructure.
Configuring the Environment
Define an environment variable GLOBUS_LOCATION and point it to the installation directory. Assuming a bash shell:
export GLOBUS_LOCATION=...absolute path to your gCore location...
Adding $GLOBUS_LOCATION/bin to your PATH environment variable is also highly recommended:
export PATH=$PATH:$GLOBUS_LOCATION/bin
Finally, building gCF from sources requires setting an environment variable BUILD_LOCATION to the location from which ant will be invoked and where temporary build structures and artefacts will be located:
export BUILD_LOCATION=...absolute path to your build location...
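To make these settings persistent across sessions, they can be appended to the shell profile, e.g. ~/.bashrc; the paths below are only examples of a gCore location and a build location:
export GLOBUS_LOCATION=/opt/gCore
export PATH=$PATH:$GLOBUS_LOCATION/bin
export BUILD_LOCATION=$HOME/gcore-build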
Configuring the Container
Specify the hostname of your machine as the value of the logicalHost parameter in the container's configuration file $GLOBUS_LOCATION/etc/globus_wsrf_core/server-config.wsdd:
<parameter name="logicalHost" value="..yourhostname..."/>
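If in doubt about the value to use, the fully qualified name of the machine can usually be obtained with hostname -f on a Linux host. For example, for a machine named grids15.eng.it the parameter would read:
<parameter name="logicalHost" value="grids15.eng.it"/>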
Configuring the gHN
In the gHN's configuration file $GLOBUS_LOCATION/config/GHNConfig.xml, override wherever appropriate the default values of the following properties:
securityenabled | true if the gHN can operate in a secure infrastructure, false otherwise. |
mode | in development mode, the gHN does not publish its own profile nor the profiles of the deployed Running Instances in the infrastructure; in production mode, it does. |
rootVO | the rootVO of the gHN. |
defaultVO | the defaultVO of the gHN. |
infrastructure | the infrastructure of the gHN. |
labels | [coming soon] |
rootGHN | [coming soon] |
GHNtype | [coming soon] |
localProxy | [coming soon] |
coordinates | A pair of comma-separated values for the latitude and longitude of the gHN. Coordinates for some popular locations are available here. |
country | [coming soon] |
location | [coming soon] |
updateInterval | [coming soon] |
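Before overriding any defaults, it may be worth keeping a copy of the shipped configuration and locating the properties above in it; a simple precaution rather than a required step:
cp $GLOBUS_LOCATION/config/GHNConfig.xml $GLOBUS_LOCATION/config/GHNConfig.xml.orig
grep -nE 'securityenabled|mode|infrastructure|coordinates' $GLOBUS_LOCATION/config/GHNConfig.xml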
Configuring Logging
A running gCore container will produce extensive logs in accordance with the log4j configuration directives contained in $GLOBUS_LOCATION/container-log4j.properties. By default, the container logs to a file container.log with a DEBUG level for all the gCore code components. container.log is created in the location from which the container is started.
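For example, to reduce verbosity in production you can raise the level of the relevant categories in container-log4j.properties. A minimal sketch, assuming the gCore components are configured under a category name such as org.gcube; check the category names that actually appear in your copy of the file:
log4j.category.org.gcube=INFO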
Install Host Credentials
Copy the host certificate and the private key, respectively, to:
- /etc/grid-security/hostpubliccert.pem (please check that the certificate file has -rw-r--r-- permissions)
- /etc/grid-security/hostprivatekey.pem (please check that the private key file has -r-------- permissions).
Both certificate and private key must be owned by the user that runs the container.
You can obtain host credentials (certificate and private key) from an official Certification Authority.
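A shell sketch of the credential installation, assuming the files obtained from the Certification Authority are locally available as hostcert.pem and hostkey.pem and that the container runs as the (hypothetical) user gcore:
sudo cp hostcert.pem /etc/grid-security/hostpubliccert.pem
sudo cp hostkey.pem /etc/grid-security/hostprivatekey.pem
sudo chmod 644 /etc/grid-security/hostpubliccert.pem    # -rw-r--r--
sudo chmod 400 /etc/grid-security/hostprivatekey.pem    # -r--------
sudo chown gcore /etc/grid-security/hostpubliccert.pem /etc/grid-security/hostprivatekey.pem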
Configure Container Security
Set the global security descriptor of the Java WS-Core container in the file $GLOBUS_LOCATION/etc/globus_wsrf_core/global_security_descriptor.xml.
See the global_security_descriptor.xml example.
Warning: please be sure to properly set the <context-timer-interval value="300000"/> tag to ease the effect of the GSISecureConversation memory leak problem in Java WS-Core (see the example above).
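Purely as an illustration, and assuming the standard Java WS-Core descriptor elements, such a descriptor might point the container at the host credentials installed above and set the timer interval mentioned here; the linked example file remains the authoritative reference:
<securityConfig xmlns="http://www.globus.org">
  <credential>
    <key-file value="/etc/grid-security/hostprivatekey.pem"/>
    <cert-file value="/etc/grid-security/hostpubliccert.pem"/>
  </credential>
  <context-timer-interval value="300000"/>
</securityConfig>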
Modify the $GLOBUS_LOCATION/etc/globus_wsrf_core/server-config.wsdd file by adding the following lines inside the <globalConfiguration> tag:
<parameter name="containerSecDesc" value="etc/globus_wsrf_core/global_security_descriptor.xml"/>
(replace the yourHostName and yourDomain placeholders with the correct values, e.g. grids15.eng.it)
Configure VOMS Credentials
VOMS credentials must be installed on the local system to verify VOMS assertions. First of all, copy the certificates of the trusted VOMS servers into the /etc/grid-security/vomsdir directory (please check that the certificate files have -rw-r--r-- permissions).
You also need to create vomses files in /opt/glite/etc/vomses. These files should follow this naming convention:
<name of the VO>-<hostname of the VOMS service>
(e.g. gCore-grids01.gcore.org)
The content of each file must be as follows (on one single line):
"<name of the VO>" "<hostname of the VOMS service>" "<port of the VOMS service>" "<Distinguished Name of the VOMS certificate>" "<local name of the VO>"
E.g:
"gCore" "grids01.gcore.org" "15001" "/C=IT/O=INFN/OU=Host/L=GCORE/CN=grids01.gcore.org" "gCore"
Note: the VO name 'gCore' must be associated with the VOMS service running on grids01.gcore.org; this ensures that the assertions contained in proxy credentials are properly validated.
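Putting the above together, a shell sketch of the VOMS configuration for the gCore example; the VOMS server certificate file name is hypothetical:
sudo mkdir -p /etc/grid-security/vomsdir /opt/glite/etc/vomses
sudo cp grids01.gcore.org.pem /etc/grid-security/vomsdir/              # trusted VOMS server certificate
sudo chmod 644 /etc/grid-security/vomsdir/grids01.gcore.org.pem
echo '"gCore" "grids01.gcore.org" "15001" "/C=IT/O=INFN/OU=Host/L=GCORE/CN=grids01.gcore.org" "gCore"' | sudo tee /opt/glite/etc/vomses/gCore-grids01.gcore.org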
Verify the Installation
To verify the installation, start the container with the script $GLOBUS_LOCATION/bin/gcore-start-container. Assuming PATH is set as recommended above:
gcore-start-container
will suffice. Any instance of the container which is already running should be automatically killed, and the new instance should log the list of deployed services in nohup.out and detailed information about the startup of local services in container.log. Both files are created in the location from which the container was started. Lack of visible errors in both files indicates a successful gCore installation.
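Once the container is up, a quick inspection of both files helps spot problems early; a simple check rather than an exhaustive test, to be run from the directory the container was started in:
tail -n 50 nohup.out                                      # list of deployed services
grep -iE 'error|exception' nohup.out container.log || echo "no visible errors"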