Administrator Guide

This Guide covers the installation, configuration, and maintenance of gCore.

== Prerequisites ==

The following are prerequisites for the installation of gCore:

* <code>J2SE 1.6 update 4 SDK</code> or greater. [http://www.oracle.com/technetwork/java/javase/ Sun's] reference implementation is recommended, but versions from [http://www-128.ibm.com/developerworks/java/jdk/ IBM], [http://h18012.www1.hp.com/java/ HP], or [http://www.bea.com/framework.jsp?CNT=index.htm&FP=/content/products/weblogic/jrockit/ BEA] should work equally well.
* <code>[http://ant.apache.org/ Ant 1.6.5+]</code> to build gCF sources or to develop services with it.
* an SVN client to install gCore from the SVN repository.
* <code>[http://www.gnu.org/software/tar/tar.html GNU tar]</code> to install gCore from archived distributions.
* <code>[http://www.courtesan.com/sudo/ sudo]</code> privileges on the shell.
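
A quick way to confirm the toolchain is in place is to query each tool for its version (a minimal sketch; the exact version strings printed vary by vendor and platform):

<pre>
# Verify the prerequisite tools before installing gCore
java -version       # expect 1.6.0_04 or greater
ant -version        # expect 1.6.5 or greater
svn --version       # any recent SVN client will do
tar --version       # should report GNU tar
sudo -V             # confirms sudo is available
</pre>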

The following are prerequisites for the operation of a gHN in any infrastructure:

* A static IP address and preferably a DNS name.

The following are prerequisites for the operation of a gHN in a secure infrastructure:

* An [http://www.ntp.org/ NTP server] to synchronise the machine's clock with other machines' for correct credential validation.
* A host certificate and private key, owned by the user that runs the container and stored anywhere on the machine: the paths must be set in the ''global security descriptor'' file, as described in the [https://gcube.wiki.gcube-system.org/gcube/index.php/GHN_Security_Configuration#Security_Configuration_provided GHN security configuration] section. In most cases the certificate and the key are stored respectively in:

 <code>/etc/grid-security/hostpubliccert.pem</code> (please check that the certificate file has -rw-r--r-- permissions)
 <code>/etc/grid-security/hostprivatekey.pem</code> (please check that the private key file has -r-------- permissions)

* The public keys of the certification authorities to be accepted by the gHN, owned by the user that runs the container and stored in:

 <code>/etc/grid-security/certificates</code> (the permissions are -rw-r--r--)

For further information, please refer to the [https://gcube.wiki.gcube-system.org/gcube/index.php/GHN_Security_Configuration gCube security configuration] section. A sketch of the expected ownership and permissions is given below.
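
Assuming the default locations above, the following commands set the ownership and permissions that the container expects (a minimal sketch; <code>ghnuser</code> is an illustrative name for the account that runs the container):

<pre>
# ghnuser is hypothetical: substitute the actual account that runs the container
sudo chown ghnuser /etc/grid-security/hostpubliccert.pem /etc/grid-security/hostprivatekey.pem
sudo chown -R ghnuser /etc/grid-security/certificates

sudo chmod 644 /etc/grid-security/hostpubliccert.pem   # -rw-r--r--
sudo chmod 400 /etc/grid-security/hostprivatekey.pem   # -r--------
sudo chmod 644 /etc/grid-security/certificates/*       # -rw-r--r--
</pre>
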
== Installation ==

Once [[Status_&_Downloads|downloaded]], gCore can be installed in a directory of choice (the ''gCore location''), either by checking it out from the SVN repository directly into that directory or by expanding a pre-packaged archive there. In either case, proceed to the installation as a non-privileged user with read and write permissions on the gCore location. Due to technical constraints, the current version of gCore requires that different installations run under different users, i.e. the same user cannot configure and execute more than one container.
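
For example, to install from an archived distribution (a sketch; the archive name is illustrative and depends on the release you downloaded):

<pre>
# Expand the downloaded archive into the gCore location
mkdir -p $HOME/gCore
tar xzf gcore-distribution.tar.gz -C $HOME/gCore
</pre>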
  
The structure of the installation is the following:

<pre>
|-bin
|
|-config
|
|-endorsed
|
|-etc
|
|-lib
|
|-libexec
|
|-logs
|
|-share
</pre>

Some folders are of immediate interest to administrators and developers alike:
 
{| class="wikitable" border="1"
|-
| '''<code>bin</code>'''
| executables.
|-
| '''<code>config</code>'''
| gHN configuration files.
|-
| '''<code>etc</code>'''
| configuration files of the container and of deployed services.
|-
| '''<code>lib</code>'''
| standard and deployed service libraries.
|-
| '''<code>logs</code>'''
| log files for the gHN, Local Services, and legacy technologies.
|-
| '''<code>share</code>'''
| build tools, standard and deployed service interfaces and schemas.
|}

== Third-party software ==

gCore ships a number of third-party products. The source material is copyright of the original publishers, and the software is governed by the terms and conditions of the respective third-party licenses.

Here is the complete list, grouped by provider.

'''Apache Software Foundation (ASF) ANT'''
* ant-launcher 1.6.5
* ant 1.6.5
* antlr 2.7.6

'''ASF AXIS'''
* addressing 1.0
* axis 1.2RC (globus patched)
* saaj 1.2RC
* jaxrpc 1.2RC
* axis-url 1.2.6
* wsdl4j 1.2RC

'''ASF XML'''
* resolver 1.1.1
* xercesImpl 2.6.2
* xml-apis 2.6.2
* xmlsec 1.2.1
* xalan 2.6

'''ASF COMMONS'''
* commons-beanutils 1.6.1
* commons-cli 2.0
* commons-collections 3.0
* commons-digester 1.2
* commons-discovery 0.2dev
* commons-io 1.2
* commons-lang 2.4
* commons-logging 1.1.1

'''Tomcat 4.1'''
* naming-java 4.1
* naming-resources 4.1
* naming-factory 4.1
* naming-common 4.1

'''GLOBUS 4.0.x'''
* cog-axis
* cog-jglobus
* cog-tomcat
* cog-url
* puretls 0.9b4
* cryptix-asn1 ?
* cryptix.jar ?
* cryptix32 3.2.0
* bootstrap ?
* globus_usage_core
* globus_usage_packets_common
* globus_wsrf_mds_aggregator
* globus_wsrf_mds_aggregator_stubs
* globus_wsrf_servicegroup
* globus_wsrf_servicegroup_stubs
* wsrf_common
* wsrf_core
* wsrf_mds_index_stubs
* wsrf_mds_usefulrp
* wsrf_test
* wsrf_tools
* wsrf_mds_usefulrp_schema_stubs
* wsrf_provider_jce
* wsrf_core_stubs
* wsrf_mds_index

'''gLite'''
* glite-security-util-java 1.3.4

'''MISC'''
* cglib 2.2
* objenesis 1.1
* bcprov-jdk14 1.2.2
* jce-jdk13 1.2.5
* concurrent ?
* SUN servlet.jar 2.3/1.2 (JSP)
* opensaml 1.0.1 (globus patched)
* kxml2 2.3.0
* log4j 1.2.15
* jgss ?
* junit 3.8.1
* wss4j ?
* SUN jsr173_api ?
* BEA commonj 1.1
* Jaxen XPath library - jaxen-1.1-beta-9.jar
  
 
== Configuration ==

Configuring the installation can be roughly distributed across the following steps: configuring the environment, the container, the gHN associated with a running instance of the container, and the operation of the gHN in a secure infrastructure.

=== Configuring the Environment ===
 
:* Define an environment variable '''<code>GLOBUS_LOCATION</code>''' and point it to the gCore location. Assuming a bash shell:
  
:<pre>export GLOBUS_LOCATION=absolute path to your gCore location</pre>
  
:* (optional) Add ''<code>$GLOBUS_LOCATION/bin</code>'' to the value of your '''<code>PATH</code>''' environment variable:
  
:<pre>export PATH=$PATH:$GLOBUS_LOCATION/bin</pre>
  
:* (optional) If building gCF-compliant services, define an environment variable '''<code>BUILD_LOCATION</code>''' and set it to the location from which <code>ant</code> will be invoked and where temporary build structures and artefacts will be located:

:<pre>export BUILD_LOCATION=absolute path to your build location</pre>
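
To make these settings permanent (and visible to cron jobs such as the access-log cleaner described later), append them to the shell initialisation file. A minimal sketch, assuming bash and an illustrative installation path:

<pre>
# Persist the gCore environment across logins (~/.bashrc or ~/.profile)
echo 'export GLOBUS_LOCATION=/home/ghnuser/gCore' >> $HOME/.bashrc   # path is illustrative
echo 'export PATH=$PATH:$GLOBUS_LOCATION/bin' >> $HOME/.bashrc
</pre>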
 
 
 
=== Configuring the Container ===

Specify the hostname of your machine as the value of the <code>logicalHost</code> parameter in the container's configuration file <code>$GLOBUS_LOCATION/etc/globus_wsrf_core/server-config.wsdd</code>:
  
 
<pre><parameter name="logicalHost" value="..yourhostname..."/></pre>
 
In the default configuration, the container allocates '''1GB''' of heap space to the JVM in which it runs. This is a production-level requirement; it can be increased by adding parameters to the <code>$GCORE_START_OPTIONS</code> variable, either by editing the script <code>$GLOBUS_LOCATION/bin/gcore-start-container</code> or by setting the variable in the execution environment. To decrease the memory instead, the script itself must be edited, since the JVM resolves duplicated settings in favour of the higher value.

Moreover, any setting reported in the <code>$GCORE_START_OPTIONS</code> variable is passed to the container process and evaluated by the JVM.
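
For example, to raise the heap ceiling to 2GB via the environment (a sketch; any standard JVM flag can be passed the same way):

<pre>
# The value of GCORE_START_OPTIONS is passed verbatim to the JVM that runs the container
export GCORE_START_OPTIONS="-Xmx2048m"
gcore-start-container
</pre>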
 
 
=== Configuring the gHN ===

The configuration of the gHN relates to its operation within the infrastructure and can be found in <code>$GLOBUS_LOCATION/config/GHNConfig.xml</code>. The file <code>$GLOBUS_LOCATION/config/GHNConfig.client.xml</code> ''can'' be used to dedicate a separate configuration to a gHN that operates in [[Contexts#The_gHN_Context|client mode]].

The following gHN properties are available for configuration:
 
  
 
{| class="wikitable" border="1"
|-
| '''<code>securityenabled</code>'''
| <code>true</code> if the gHN can operate in a secure infrastructure, <code>false</code> otherwise.
|-
| '''<code>accountingenabled</code>'''
| <code>true</code> if the gHN must account, via the gCube accounting system, every call received, <code>false</code> otherwise.
|-
| '''<code>mode</code>'''
| either '''<code>CONNECTED</code>''' or '''<code>STANDALONE</code>''' depending on whether the gHN does or does not publish information in the infrastructure.
|-
| '''<code>infrastructure</code>'''
| the name of the infrastructure in which the gHN operates (e.g. <code>gcube</code>, <code>d4science</code>, ...).
|-
| '''<code>startScopes</code>'''
| a comma-separated list of VOs that the gHN joins.
|-
| '''<code>allowedScopes</code>'''
| a comma-separated list of VOs that the gHN may potentially join (upon VO Manager decision).
|-
| '''<code>labels</code>'''
| the name of the file that includes custom labels to characterise the gHN. These are added to those automatically derived by gCore and published in the gHN profile. The file name must be relative to the <code>$GLOBUS_LOCATION/config</code> directory.
|-
| '''<code>GHNtype</code>'''
| either '''<code>DYNAMIC</code>''' or '''<code>STATIC</code>''' depending on whether the gHN can or cannot be used as a target for dynamic deployment operations.
|-
| '''<code>coordinates</code>'''
| a pair of comma-separated values for the latitude and longitude of the gHN. Coordinates for some popular locations are available [[gHN Coordinates|here]].
|-
| '''<code>country</code>'''
| the two-character [http://www.iso.org/iso/english_country_names_and_code_elements ISO code] of the country where the gHN is located.
|-
| '''<code>location</code>'''
| the name of the location.
|-
| '''<code>publishedHost</code>'''
| the hostname to declare in the gHN and Running Instance profiles, if different from the actual one.
|-
| '''<code>publishedPort</code>'''
| the port to declare in the gHN and Running Instance profiles, if different from the actual one.
|-
| '''<code>updateInterval</code>'''
| how often (in seconds) the gHN has to refresh its profile on the IS.
|-
| '''<code>portRange</code>''' [optional]
| a dash-separated pair of numbers that identifies a range of free ports, if any.
|-
| '''<code>testInterval</code>''' [optional]
| how often the monitoring Probes have to perform local tests on the gHN.
|}

For example, the configuration required to join the gHN to the <code>/gcube/devsec</code> and <code>/gcube/testing</code> VOs is the following:

<pre>
infrastructure = gcube
startScopes = devsec,testing
</pre>

For an in-depth coverage of scope and scope-related parameters (<code>infrastructure</code> and <code>startScopes</code>), see the [[Scope_Management|Developer Guide]].
  
 
=== Configuring Logging ===

A running gCore container produces extensive logs in accordance with the [http://logging.apache.org/log4j/ log4j] configuration directives contained in <code>$GLOBUS_LOCATION/container-log4j.properties</code>. By default, the container logs to a file called <code>$GLOBUS_LOCATION/logs/container.fulllog</code> with a <code>TRACE</code> level for all the gCF components, and to <code>$GLOBUS_LOCATION/logs/container.log</code> with an <code>INFO</code> level. Local Services also have dedicated file loggers.
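
To reduce the volume of the main log, the level of the gCube components can be raised in <code>container-log4j.properties</code>. A minimal sketch, assuming the standard log4j 1.2 property syntax (the category name is an assumption; check the shipped file for the names actually in use):

<pre>
# Append an override for the gCube categories; in java.util.Properties
# loading, the last definition of a key wins
echo 'log4j.category.org.gcube=WARN' >> $GLOBUS_LOCATION/container-log4j.properties
</pre>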
===Install host credentials===
+
==== Configuring Access Logs purging====
  
Copy host certificate and private key respectively in:
+
Starting from GHN v 3.7.0, the GHN distribution contains a mechanism to clean access log files from the system ( that in some cases can occupy a considerable amount of space).
  
* <code>/etc/grid-security/hostpubliccert.pem</code> (please check that the certificate file has -rw-r--r-- permissions)
+
The script ''gcore-clean-accesslogs'' can be used to remove accesslogs older than 7 days ( configurable) and it can also be installed as cronjob via the ''gcore-clean-accesslogs-cron''. Both files are located under the $GLOBUS_LOCATION/bin folder of the GHN.
* <code>/etc/grid-security/hostprivatekey.pem</code> (please check that the private key file has -r-------- permissions).
+
  
Both certificate and private key must be owned by the user that runs the container.
+
In case of older versions of GHN distributions the files can be downloaded from:
+
You can obtain host credentials (certificate and private key from an official Certification Authority)
+
  
===Configure container security===
+
* http://svn.research-infrastructures.eu/public/d4science/gcube/trunk/distributions/ghn-distribution-bundle/gCore/bin/gcore-clean-accesslogs
 +
* http://svn.research-infrastructures.eu/public/d4science/gcube/trunk/distributions/ghn-distribution-bundle/gCore/bin/gcore-clean-accesslogs-cron
  
Set Global security descriptor of Java-WS-Core container in file <code>$GLOBUS_LOCATION/etc/globus_wsrf_core/global_security_descriptor.xml</code>.
+
and installed under the $GLOBUS_LOCATION/bin folder.
  
See [[Media:global_security_descriptor.xml]] example.
+
PLEASE NOTE : the scripts assume that the GLOBUS_LOCATION var is set inside the $HOME/.profile or $HOME/.bashrc
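
For example, a manual crontab entry that runs the cleaner nightly (a sketch; the schedule is illustrative, and the entry must be installed for the user that owns the installation):

<pre>
# Install a nightly run of the access-log cleaner at 02:30 for the current user;
# sourcing ~/.bashrc first makes GLOBUS_LOCATION visible to the script
( crontab -l 2>/dev/null ; echo '30 2 * * * . $HOME/.bashrc && $GLOBUS_LOCATION/bin/gcore-clean-accesslogs' ) | crontab -
</pre>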
  
=== Configuring container security ===

''Note: this section is outdated.''

Detailed information about secure container configuration is provided in the [https://gcube.wiki.gcube-system.org/gcube/index.php/GHN_Security_Configuration gCube security configuration] section.

Detailed information about non-default settings (for example, interoperability with a secure external infrastructure or with a VOMS server) is available in the [https://gcube.wiki.gcube-system.org/gcube/index.php/Security_Library#Extension_Security_Libraries Security Libraries] section of the gCube wiki.
=== Supporting resource encryption/decryption ===

If the gHN is expected to manage gCube encrypted resources, the [http://goo.gl/qTZO0j AES symmetric key] has to be downloaded and stored in the <code>$GLOBUS_LOCATION/config</code> folder. Alternatively, a new compatible symmetric key can be generated from a Linux shell as follows:

<pre>openssl rand -out symm.key 16</pre>
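
The command above writes <code>symm.key</code> (16 random bytes, i.e. a 128-bit AES key) to the current directory; move the file to <code>$GLOBUS_LOCATION/config</code> so that the gHN can find it.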
  
== Verify the Installation ==

To verify the installation, first start the container: use the script <code>$GLOBUS_LOCATION/bin/gcore-start-container</code> in a non-secure infrastructure, or <code>$GLOBUS_LOCATION/bin/gcore-start-secure-container</code> in a secure infrastructure. Assuming <code>PATH</code> is set as recommended above:

<pre>gcore-start-container</pre>
or
<pre>gcore-start-secure-container</pre>

will suffice. Any instance of the container which is already running should be automatically ''kill''-ed.
By default, the commands above start the container on port <code>8080</code> in a non-secure infrastructure and <code>8443</code> in a secure infrastructure. To start on another port, use the <code>-p <port></code> option.
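
For example (the port number is illustrative):

<pre>gcore-start-container -p 9090</pre>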
  
"<name of the VO>" "<hostname of the VOMS service>" "<port of the VOMS service>"
+
Then, two steps can be performed to verify that the container is locally working fine:
"<Distinguished Name of the VOMS certificate>" "<local name of the VO>"
+
  
E.g:
+
# the new instance should log the list of deployed services in <code>$GLOBUS_LOCATION/nohup.out <code> and detailed information about the startup of local services in <code>$GLOBUS_LOCATION/logs/container.log</code>.  Lack of visible errors in both files indicates a successful gCore installation and startup.
 +
# stopping the container, which is an action that contacts the container itself. For this, use the appropriate stop command, according to your configuration:  
 +
<pre>gcore-stop-container</pre>
 +
or
 +
<pre>gcore-stop-secure-container</pre>
  
"gCore" "grids01.gcore.org" "15001" "/C=IT/O=INFN/OU=Host/L=GCORE/CN=grids01.gcore.org" "gCore"
+
Finally, for a full exploitation of the container (e.g. remote management and deployment, and having your services contacted), the host and port must be public and reachable from outside.
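
A quick way to check the startup logs and the port from the shell (a sketch; replace the hostname and port with your values):

<pre>
# Look for errors in the startup logs
grep -i error $GLOBUS_LOCATION/nohup.out $GLOBUS_LOCATION/logs/container.log

# Check that the container port is reachable (run from another machine for a real test)
nc -z yourhostname 8080 && echo "port open"
</pre>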
  
== Troubleshooting ==

=== gHN with a high number of invocations ===

If a gHN hosts a service instance subject to a high number of invocations, the gContainer may stop serving callers' requests. This is due to a mistaken usage of the <code>id</code> command by the underlying technology: the process soon falls into a "too many open files" state and does essentially nothing from that moment on.

When such a condition is predictable, we suggest creating a temporary folder for the system commands, removing the <code>id</code> command from it, and using that folder in the <code>PATH</code> instead of the official one.

The complete workaround is the following:

* create a folder in which to collect the system commands:

<pre>
mkdir fakebin
cd fakebin
find /usr/bin -type f -exec ln {} \;
find /usr/bin -type l -exec cp -a {} . \;
rm id
</pre>

* remove the <code>/usr/bin</code> folder from the <code>PATH</code>
* add the new <code>fakebin</code> folder to the <code>PATH</code> (see the sketch below)

Of course, this could have an impact on other processes, therefore the patched environment must be used only for the gContainer process.
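
A minimal sketch of the last two steps, scoping the patched <code>PATH</code> to the container only (the <code>fakebin</code> location is illustrative):

<pre>
# Strip /usr/bin from PATH and prepend the patched folder,
# only in the shell that starts the container
export PATH=$HOME/fakebin:$(echo $PATH | sed -e 's|/usr/bin:||' -e 's|:/usr/bin||')
gcore-start-container
</pre>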
