Administrator Guide
This Guide covers the installation, configuration, and maintenance of gCore.
Prerequisites
The following are prerequisites for the installation of gCore (a quick check of these tools is sketched after this list):
- A platform compatible with the GT requirements.
- J2SE 1.5.08 SDK or greater. Sun's reference implementation is recommended, but versions from IBM, HP, or BEA should work equally well.
- Ant 1.6.5+ to build gCF sources or to develop services with it.
- An SVN client to install gCore from the SVN repository.
- GNU tar to install gCore from archived distributions.
- sudo privileges on the shell.
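A minimal sketch to confirm that these tools are available at the required versions (the commands are the tools' standard version flags; the expected versions are the minimums listed above):
java -version    # expect 1.5.0_08 or greater
ant -version     # expect 1.6.5 or greater
svn --version
tar --version    # GNU tar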
The following are prerequisites for the operation of a gHN in any infrastructure:
- A static IP address and preferably a DNS name.
The following are prerequisites for the operation of a gHN in a secure infrastructure:
- An NTP server to synchronise the machine's clock for correct credential validation.
- A host certificate and private key (owned by the user that runs the container), respectively in the locations below (a sketch for setting their permissions follows this list):
  - /etc/grid-security/hostpubliccert.pem (please check that the certificate file has -rw-r--r-- permissions)
  - /etc/grid-security/hostprivatekey.pem (please check that the private key file has -r-------- permissions)
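A minimal sketch of setting the expected permissions on the credential files (assuming they are already in place and owned by the user that runs the container):
chmod 644 /etc/grid-security/hostpubliccert.pem    # -rw-r--r--
chmod 400 /etc/grid-security/hostprivatekey.pem    # -r--------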
Installation
Once downloaded, gCore can be installed in a directory of choice (the gCore location). Whether installing from the SVN repository or from an archived distribution, proceed with the installation as a non-privileged user with read and write permissions on the gCore location.
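For an archived distribution, installation reduces to unpacking the archive into the gCore location; a minimal sketch, where the archive name gCore.tar.gz is hypothetical and /usr/local/gcore stands for your chosen gCore location:
mkdir -p /usr/local/gcore
tar xzf gCore.tar.gz -C /usr/local/gcore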
The structure of the installation is the following:
|- bin
|- config
|- endorsed
|- etc
|- lib
|- libexec
|- logs
|- share
Some folders are of immediate interest to administrators and developers alike:
bin     | executables.
config  | gHN configuration files.
etc     | configuration files of the container and of deployed services.
lib     | standard and deployed service libraries.
logs    | log files for the gHN, Local Services, and legacy technologies.
share   | build tools, standard and deployed service interfaces and schemas.
Configuration
Configuring the installation roughly breaks down into the following steps: configuring the environment, the container, the gHN associated with a running instance of the container, and the operation of the gHN in a secure infrastructure.
Configuring the Environment
- Define an environment variable GLOBUS_LOCATION and point it to the gCore location. Assuming a bash shell (a sketch for persisting these settings follows this list):
export GLOBUS_LOCATION=...absolute path to your gCore location...
- (optional) Add $GLOBUS_LOCATION/bin to the value of your PATH environment variable:
export PATH=$PATH:$GLOBUS_LOCATION/bin
- (optional) If building gCF-compliant services, define an environment variable BUILD_LOCATION and set it to the location from which ant will be invoked and where temporary build structures and artefacts will be located:
export BUILD_LOCATION=...absolute path to your build location...
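To make these settings survive new shells, they can be appended to the shell's startup file; a minimal sketch, assuming bash and a hypothetical gCore location of /usr/local/gcore:
cat >> ~/.bashrc <<'EOF'
export GLOBUS_LOCATION=/usr/local/gcore
export PATH=$PATH:$GLOBUS_LOCATION/bin
EOF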
Configuring the Container
Specify the hostname of your machine as the value of the logicalHost parameter in the container's configuration file $GLOBUS_LOCATION/etc/globus_wsrf_core/server-config.wsdd:
<parameter name="logicalHost" value="..yourhostname..."/>
Configuring the gHN
The configuration of the gHN relates to its operation within the infrastructure and can be found in $GLOBUS_LOCATION/config/GHNConfig.xml. The file $GLOBUS_LOCATION/config/GHNConfig.client.xml can be used to dedicate a separate configuration to a gHN that operates in client mode.
The following gHN properties are available for configuration:
securityenabled       | true if the gHN can operate in a secure infrastructure, false otherwise.
mode                  | either CONNECTED or STANDALONE, depending on whether the gHN does or does not publish information in the infrastructure.
infrastructure        | the name of the infrastructure in which the gHN operates (e.g. gcube, d4science, ...).
startScopes           | a comma-separated list of VOs that the gHN joins.
labels                | the name of the file that includes custom labels to characterize the gHN. These are added to those automatically derived by gCore and published in the gHN profile. The file name must be relative to the $GLOBUS_LOCATION/config directory.
GHNtype               | either DYNAMIC or STATIC, depending on whether the gHN can or cannot be used as a target for dynamic deployment operations.
localProxy [optional] | the name of a file with credentials used by the gHN when no delegated credentials are available. The file name must be relative to the $GLOBUS_LOCATION/config directory.
coordinates           | a pair of comma-separated values for the latitude and longitude of the gHN. Coordinates for some popular locations are available here.
country               | the two-character ISO code of the country where the gHN is located.
location              | the name of the location.
updateInterval        | how often the gHN has to refresh its profile on the IS (in seconds).
portRange [optional]  | a dash-separated pair of numbers that identify a range of free ports, if any.
For example, the configuration required to join the gHN to the /gcube/devsec and /gcube/testing VOs is the following:
infrastructure = gcube
startScopes = devsec,testing
For in-depth coverage of scope and scope-related parameters (infrastructure and startScopes), see the Developer Guide.
Configuring Logging
A running gCore container produces extensive logs in accordance with the log4j configuration directives contained in $GLOBUS_LOCATION/container-log4j.properties. By default, the container logs to a file called $GLOBUS_LOCATION/logs/container.fulllog with a TRACE level for all the gCF components, and to $GLOBUS_LOCATION/logs/container.log with an INFO level for all the gCF components. Local Services also have dedicated file loggers.
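As an illustration, verbosity is tuned with standard log4j directives in that file; the category name below is an assumption for illustration only, so check container-log4j.properties for the category names it actually defines:
# illustrative log4j fragment; org.gcube is an assumed category name
log4j.category.org.gcube=DEBUG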
Configure container security
- Set the Security Descriptor of the underlying container in $GLOBUS_LOCATION/etc/globus_wsrf_core/global_security_descriptor.xml, using this example as a guide.
- Note: Please include the configuration <context-timer-interval value="300000"/> to ease the effect of a well-known bug in the underlying technologies (as in the example above).
- Add the following configuration in the <globalConfiguration> section of $GLOBUS_LOCATION/etc/globus_wsrf_core/server-config.wsdd:
<parameter name="containerSecDesc" value="etc/globus_wsrf_core/global_security_descriptor.xml"/>
Configure VOMS credentials
VOMS credentials must be installed on the local system to verify VOMS assertions. To do this:
- Copy the certificates of trusted VOMS servers into $GLOBUS_LOCATION/etc/grid-security/vomsdir.
- Note: Please check that certificate files have -rw-r--r-- permissions.
- Create VOMS files in /opt/glite/etc/vomses using the following conventions:
  - file naming convention: <VO Name>-<VOMS SERVICE HOSTNAME>
  - content convention: "<VO Name>" "<VOMS SERVICE HOSTNAME>" "<VOMS SERVICE PORT>" "<DN of VOMS CERTIFICATE>" "<VO LOCAL NAME>"
  - Example: "devsec" "grids01.gcore.org" "15001" "/C=IT/O=INFN/OU=Host/L=GCORE/CN=grids01.gcore.org" "devsec"
- Note: With this file, the VO name devsec is associated with the VOMS service running on grids01.gcore.org, which ensures that assertions contained in proxy credentials are properly validated. A sketch that puts these conventions together follows this list.
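Putting the conventions above together, a minimal sketch that creates the VOMS file for the example (values are taken verbatim from the example above; adjust them for your own VOMS servers):
sudo mkdir -p /opt/glite/etc/vomses
sudo tee /opt/glite/etc/vomses/devsec-grids01.gcore.org >/dev/null <<'EOF'
"devsec" "grids01.gcore.org" "15001" "/C=IT/O=INFN/OU=Host/L=GCORE/CN=grids01.gcore.org" "devsec"
EOF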
Install the voms-proxy-init command for local testing (optional)
- Download the required RPMs and configuration file.
- Install the RPMs in the order in which they appear on the download page.
- Copy the configuration file to the directory /etc/glite/profile.d/.
- Modify the configuration file in accordance with the local values of the environment variables JAVA_HOME and GLOBUS_LOCATION (a usage sketch follows this list).
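Once installed, a local test might look like the following sketch (the profile file name is left as a placeholder, since it depends on the configuration file you downloaded; -voms names the VO to contact):
source /etc/glite/profile.d/...name of the copied configuration file...
voms-proxy-init -voms devsec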
Verify the Installation
To verify the installation, start the container with the script $GLOBUS_LOCATION/bin/gcore-start-container. Assuming PATH is set as recommended above:
gcore-start-container
will suffice. Any instance of the container which is already running should be automatically killed, and the new instance should log the list of deployed services in $GLOBUS_LOCATION/nohup.out and detailed information about the startup of Local Services in $GLOBUS_LOCATION/logs/container.log.
Lack of visible errors in both files indicates a successful gCore installation and startup.
- Note: By default, the command above starts the container on port 8080. To start it on another port, use the -p <port> option, as in the sketch below.
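For instance, a quick way to start the container on an alternative port and then watch the two files named above for errors (9090 is an arbitrary example port):
gcore-start-container -p 9090
tail -f $GLOBUS_LOCATION/nohup.out $GLOBUS_LOCATION/logs/container.log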