0.5 Release Notes

From GCube System

gCore 0.5 introduces the following changes:

Changes Related to the gHN

  • a more streamlined distribution, with minimal runtime dependencies on legacy services for a faster gHN startup.
The number of Globus services that run in a gHN has been limited to those required to support publication and notification functionality. In particular, the historical dependency on the Globus Registry Service has finally been eliminated, and its output in nohup.out has been replaced with a native version that separates the display of gCube services from the display of the (surviving) Globus services. In addition, a large number of legacy configuration, dependency, and test elements have been removed from the distribution. It is anticipated that this streamlining process will continue in future releases.
  • the integration of the ResultSet as a Local Service for simplified configuration, faster dynamic deployment, and query-by-reference support in gCore.
The ResultSet service and associated libraries are now distributed along with gCore in reflection of their widespread use within the infrastructure. This simplifies the definition and maintenance of the majority of service profiles (no need for explicit dependencies), streamlines dynamic deployment processes (less overhead), and makes it possible to support query-by-reference in gCore (see below).
  • a more accurate and gCube-compliant model of gHN lifecycle.
The gHN lifecycle now distinguishes readiness (all the RIs are activated) from certification (all the RIs are operational), both at startup and during later phases. It also separates failure from shutdown; in both cases, activated RIs are notified and a countdown to the end of the JVM process is started. The handling of all the phases of the gHN lifetime has significantly improved, with clear log feedback and timely publication in the infrastructure. As to the latter, gHN profiles now include an explicit field that marks their status.
  • more modular and informative logging for improved debugging and monitoring.
Local Services now have separate file appenders, and so does Globus. Logger propagation procedures have been improved to support separate file appenders for individual services (highly recommended during development). Log entries now report cumulative timings on a per-thread basis, and service calls are fully timed both in case of success and in case of failure. All logfiles are now gathered in a dedicated directory under the gCore installation.
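A per-service file appender can be sketched as a log4j fragment along the following lines; all names here (logger category, appender name, file path) are illustrative assumptions, not the actual gCore configuration:

```properties
# Route one service's logger to its own file and stop propagation to the
# container log (hypothetical names, for illustration only).
log4j.logger.org.acme.myservice=DEBUG, MYSERVICE
log4j.additivity.org.acme.myservice=false
log4j.appender.MYSERVICE=org.apache.log4j.FileAppender
log4j.appender.MYSERVICE.File=logs/myservice.log
log4j.appender.MYSERVICE.layout=org.apache.log4j.PatternLayout
log4j.appender.MYSERVICE.layout.ConversionPattern=%d [%t] %-5p %c - %m%n
```

Disabling additivity is what keeps the service's entries out of the shared container log, giving the separate appender recommended above.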
  • new gHN management interface for local development and monitoring.
The gHN now offers a JMX interface for local monitoring and management of gHN, RIs, and loggers (typically via jconsole). Logger levels, RI and gHN status, RI current scopes, and RI call statistics can all be inspected and/or acted upon for testing purposes. It is anticipated that further information will be exposed through the interface in future releases.
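The mechanism behind this interface is the standard platform MBean server, which is also what jconsole attaches to. The minimal sketch below reads an attribute programmatically, the same way a tool would inspect a gHN's exposed beans; the gCore bean names themselves are not shown, only the generic JMX access pattern:

```java
import java.lang.management.ManagementFactory;
import javax.management.MBeanServer;
import javax.management.ObjectName;

public class JmxPeek {

    // Reads an attribute from the platform MBean server -- the same
    // mechanism jconsole uses to inspect a gHN's loggers and RI statistics.
    public static Object attribute(String name, String attr) throws Exception {
        MBeanServer server = ManagementFactory.getPlatformMBeanServer();
        return server.getAttribute(new ObjectName(name), attr);
    }

    public static void main(String[] args) throws Exception {
        // A standard JVM bean, used here only as a stand-in for a gHN bean.
        System.out.println(attribute("java.lang:type=Runtime", "VmName"));
    }
}
```

A gHN bean would be registered on the same server, so no extra configuration is needed for local inspection.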
  • automatic probing of free ports within configurable range for generalised approach to port binding.
The gHN configuration can now specify a range of ports and the gHN context can be asked to probe for the first free port in the range. The facility is currently used by the new management interface (see above), but it is generically available to all gHN clients.
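Probing for the first free port in a configured range can be sketched as follows; the class and method names are hypothetical, not the actual gHN context API:

```java
import java.io.IOException;
import java.net.ServerSocket;

public class PortProbe {

    // Returns the first port in [start, end] that can be bound, or -1 if
    // the whole range is busy (a sketch of the gHN probing facility).
    public static int firstFreePort(int start, int end) {
        for (int port = start; port <= end; port++) {
            try (ServerSocket probe = new ServerSocket(port)) {
                return port;          // bind succeeded: the port is free
            } catch (IOException busy) {
                // port in use; try the next one in the range
            }
        }
        return -1;
    }
}
```

Closing the probe socket immediately releases the port for the real consumer, such as the management interface above.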
  • simplified offline use of the gHN through dedicated configuration file.
The gHN context now uses a separate gHN configuration for offline use, which can diverge arbitrarily from the configuration used in online mode. This is especially intended to simplify the execution of clients that operate in scopes other than the gHN's.
  • new Service Maps.
Service Maps now require only a minimal configuration. In previous releases they were overloaded with information about the URIs of all the enabling services, reflected the IS deployment picture, and were hard to maintain. The new Service Maps are greatly simplified and require only the URI of the Information Collector service of the scope. Starting from it, all the other instances of the enabling services are dynamically discovered and exploited.

Changes Related to Services

  • full parallelisation of RI activation for faster gHN startup.
The activation of services within a starting gHN is now entirely asynchronous. This prevents lengthy staging activities (such as the remote state recovery announced below) of particular RIs from delaying the others in reaching an operational status. On modern machines, it is more generally conducive to increased startup performance. As an added bonus, it minimises the risk of race conditions with the uncooperative threads of the underlying legacy technologies. The gHN merely ‘touches’ port-types and homes of deployed services and proceeds quickly to completion. After the service components have been awakened, they synchronise via the exchange of events mediated by the service context. In a similar manner, the service context synchronises with the gHN context to report progress and failures. The service lifecycle now separates operational failures from gHN-induced shutdowns. The handling of all the phases of the RI lifetime has significantly improved, with clear log feedback and timely publication in the infrastructure.
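The key property, that one slow or failing RI does not hold back the others, can be sketched with a plain thread pool; this is a minimal illustration of the scheme, not the gHN's actual activation code:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class ParallelActivation {

    // Activates each RI on its own task, so that one RI's lengthy staging
    // (or outright failure) does not delay the others. Returns the names
    // of the RIs that activated successfully.
    public static List<String> activateAll(List<Callable<String>> activations)
            throws InterruptedException {
        ExecutorService pool = Executors.newFixedThreadPool(4);
        try {
            List<Future<String>> futures = pool.invokeAll(activations);
            List<String> activated = new ArrayList<>();
            for (Future<String> f : futures) {
                try {
                    activated.add(f.get());
                } catch (ExecutionException failed) {
                    // one RI failed to activate; the others are unaffected
                }
            }
            return activated;
        } finally {
            pool.shutdown();
        }
    }
}
```

Each activation task would correspond to the ‘touch’ of one service's port-types and homes.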
  • new lifetime callbacks on port-type and home implementations for more modular lifecycle customisation.
All the lifetime callbacks previously defined for the service context (e.g. onInitialisation(), onReady(), onFailure(), etc.) are now also available for port-type and home implementations.
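The pattern can be sketched as follows; the base class and callback signatures here are hypothetical shapes standing in for the gCF ones, which may differ:

```java
// Hypothetical stand-in for a gCF base class exposing lifecycle callbacks.
abstract class LifetimeAware {
    public void onInitialisation() {}
    public void onReady() {}
    public void onFailure() {}
}

// A port-type (or home) implementation can now override the same callbacks
// the service context already offered, keeping lifecycle logic next to the
// code it concerns.
public class StatefulPortType extends LifetimeAware {

    final StringBuilder trace = new StringBuilder();

    @Override
    public void onInitialisation() {
        trace.append("init;");   // e.g. allocate port-type resources
    }

    @Override
    public void onReady() {
        trace.append("ready;");  // e.g. start accepting calls
    }
}
```

The framework would invoke these hooks at the matching phases of the RI lifetime described above.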
  • RI synchronisation with co-deployed dependencies.
The logical dependencies of services may now impose a partial order on the activation of RIs. RIs become operational only after their dependencies do, if these are co-deployed on the same gHN. Cyclic dependencies are detected and broken.
  • new remote backup/recovery facilities for RI state (remote persistence).
Based on the configuration of an RI Persistence Manager, the entire persistent state of the RI - including WS-Resources and any other file placed under the storage root of the service - can now be transparently backed up to remote storage and transparently recovered from it. Recovery occurs during the staging of the RI, if no local state can be found (as would happen upon redeployment of the RI). Backup is performed at configurable intervals and conditionally on the occurrence of actual changes to the locally persisted state. gCF defines a framework for the definition of Persistence Managers, whereby persistence managers and persistence delegates exchange events via the mediation of the service context to automatically record changes to persisted WS-Resources. gCore ships with a ready-to-use instantiation of the framework that targets the gCube Storage Management Service for remote persistence.
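The conditional-backup behaviour, namely upload only when the local state has actually changed since the last run, can be sketched with a simple dirty flag; class and method names are hypothetical, and the remote upload is reduced to a counter:

```java
public class PersistenceManager {

    private volatile boolean dirty = false;
    private int backups = 0;

    // Persistence delegates would call this (via the service context's
    // event mediation) whenever a persisted WS-Resource changes.
    public void stateChanged() {
        dirty = true;
    }

    // Invoked at configurable intervals: backs up only if the locally
    // persisted state actually changed since the last backup.
    public synchronized void maybeBackup() {
        if (!dirty)
            return;        // nothing changed, skip the remote upload
        backups++;         // stand-in for an upload to remote storage
        dirty = false;
    }

    public synchronized int backupCount() {
        return backups;
    }
}
```

In the real framework, the event exchange between delegates and manager replaces the direct stateChanged() call shown here.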
  • improved resource discovery.
The gCF interface for resource discovery now contemplates the execution of queries by-reference, whereby results are delivered in an iterable stream of high-level object bindings for the discovered resources. This improves the reliability, efficiency, and responsiveness with which 'large' result sets can be processed. The reference implementation that ships with gCore builds on the ResultSet client libraries, which have now become local components of the gCore distribution. The implementation is still in progress and currently reformulates by-reference query execution in terms of by-value query execution, due to dependencies on the refactoring of the gCube Information Services that provide the back-end of the implementation. However, developers can immediately refactor their code in terms of the new API and enjoy the actual use of streams in future releases.
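From the client's viewpoint, a by-reference result looks like a lazy iterable of object bindings; the sketch below illustrates that shape (the class and the record-to-binding function are assumptions, not the gCF API):

```java
import java.util.Iterator;
import java.util.function.Function;

// Hypothetical sketch of query-by-reference from the caller's side: raw
// records are bound to high-level objects one at a time, so 'large' result
// sets never need to be fully materialised by the client.
public class ResultStream<T> implements Iterable<T> {

    private final Iterable<String> rawRecords;   // stand-in for a ResultSet reference
    private final Function<String, T> binding;   // raw record -> high-level object

    public ResultStream(Iterable<String> rawRecords, Function<String, T> binding) {
        this.rawRecords = rawRecords;
        this.binding = binding;
    }

    @Override
    public Iterator<T> iterator() {
        Iterator<String> raw = rawRecords.iterator();
        return new Iterator<T>() {
            public boolean hasNext() { return raw.hasNext(); }
            public T next() { return binding.apply(raw.next()); }  // bind lazily
        };
    }
}
```

Code written against this style of API keeps working unchanged when the back-end switches from by-value to true by-reference delivery.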
  • improved resource publication.
Resource publication now supports replication and distribution of remote IS instances. Reliability and finer-grained error handling have also been added. From this release on, published profiles report the profile schema version, which will allow better control of backward compatibility at the infrastructure level. Services' state publication is now asynchronous with respect to the registrant request, thereby allowing a faster startup of the gHN and, in general, of online operations.
  • improved remote event management.
Remote event management concerns the exchange of information among service instances via a subscription/notification mechanism. The notification interface (the only part services are aware of) has been simplified by removing some inconsistent information. Remote event registration is now asynchronous with respect to the registrant request.
  • improved scope management for WS-Resources.
Scope checks at creation, retrieval, and removal of WS-Resources have been tightened while preserving backward compatibility with legacy technologies.
  • improved lifetime management for WS-Resources.
Legacy components for WS-Resource lifetime management (expiry) have been entirely replaced with gCF components that ensure correct behaviour in the face of multi-scoped WS-resources.
  • improved locking mechanism for WS-Resources.
Legacy and proprietary locking mechanisms have been entirely replaced with gCF extensions of standard Java mechanisms. The framework and developers can now synchronise on the same locks for sensitive operations on shared data. Both read and write locks are available for increased performance, and write locks can be pre-emptively acquired for removal operations to transparently interrupt later lock acquisitions.
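The read/write discipline can be sketched with the standard Java ReentrantReadWriteLock, the kind of mechanism the gCF extensions build on; the resource class here is illustrative only:

```java
import java.util.concurrent.locks.ReadWriteLock;
import java.util.concurrent.locks.ReentrantReadWriteLock;

// A sketch of the shared-lock discipline: many readers may inspect a
// WS-Resource concurrently, while removal takes the write lock up front so
// no later reader can observe a half-removed resource.
public class GuardedResource {

    private final ReadWriteLock lock = new ReentrantReadWriteLock();
    private String payload = "alive";
    private boolean removed = false;

    public String read() {
        lock.readLock().lock();           // shared: concurrent readers allowed
        try {
            return removed ? null : payload;
        } finally {
            lock.readLock().unlock();
        }
    }

    public void remove() {
        lock.writeLock().lock();          // exclusive, acquired before any cleanup
        try {
            removed = true;
            payload = null;
        } finally {
            lock.writeLock().unlock();
        }
    }
}
```

Because the framework and service code share the same lock instance, both sides see a consistent view of the resource.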
  • improved persistence management for WS-Resources.
Four explicit and configurable modes of operation have now been introduced for home implementations: transient, hard persistent, cached persistent, and soft persistent. The modes reflect different trade-offs between access performance, memory consumption, and resilience of WS-Resource management.
  • improved local event management.
Events exchanged by gCF producers and gCF consumers are now transparently garbage collected after the expiration of their lifetime. Consumers can now request synchronous event delivery from producers so as to enforce stronger synchronisation constraints.
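Synchronous delivery means the producer invokes each consumer in its own thread of control, so the caller of fire() knows every consumer has already observed the event when the call returns; this is a minimal sketch with hypothetical names, not the gCF producer/consumer API:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

// Minimal local-event sketch: consumers subscribe to a producer, and
// fire() delivers the event synchronously to each of them in turn.
public class Producer<E> {

    private final List<Consumer<E>> consumers = new ArrayList<>();

    public void subscribe(Consumer<E> consumer) {
        consumers.add(consumer);
    }

    public void fire(E event) {
        // Synchronous delivery: when this loop finishes, every consumer
        // has seen the event -- the stronger synchronisation constraint.
        for (Consumer<E> c : consumers)
            c.accept(event);
    }
}
```

Asynchronous delivery would instead hand each event to a queue or thread pool, trading the ordering guarantee for responsiveness.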
  • improved fault management.
Clients that wish to convert gCube faults into corresponding gCube exceptions can now benefit from a better conversion of remote stack-traces into local stack-traces.
  • new programming abstraction for service-to-service calls.
The best-effort strategies that were already available through service handlers have been subsumed in higher-level call abstractions that may be used in the back-end of the service or, most appropriately, shipped with the stubs of a service for the immediate benefit of all its clients.
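A best-effort call of this kind tries each known endpoint in turn and returns the first successful result; the sketch below shows the shape such an abstraction might take (the class name and Callable-based signature are assumptions, not the gCF API):

```java
import java.util.List;
import java.util.concurrent.Callable;

// Hypothetical best-effort call abstraction: attempts stand in for the
// same operation addressed to different instances of a service.
public class BestEffortCall<T> {

    public T call(List<Callable<T>> attempts) throws Exception {
        Exception lastFailure = null;
        for (Callable<T> attempt : attempts) {
            try {
                return attempt.call();     // first success wins
            } catch (Exception e) {
                lastFailure = e;           // remember and try the next endpoint
            }
        }
        // All endpoints failed: surface the last failure to the caller.
        throw lastFailure != null ? lastFailure
                                  : new IllegalStateException("no endpoints");
    }
}
```

Shipping such a helper with the service stubs gives every client the retry behaviour without each of them re-implementing it.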

Changes Related to the Documentation

This now includes a much more reasoned and modular introduction to the motivations and requirements for WSRF-compliant mechanisms and standards, as well as sections on state persistence and publication. The level of detail is now on a par with the first section of the Primer, reflects the changes introduced in the release, and has been appropriately linked to and from the Developer's Guide.
This offers tutorial information about the interaction with the gCF interfaces for the subscription, production, and consumption of remote events.
  • new sections in the Developer's Guide.
Work on the Developer's Guide has now started, with a summary of the motivations for the design of the framework, a comprehensive overview of the architecture and main components of the framework, an overview of the service model assumed by the framework, an overview of the configuration requirements, and a detailed presentation of the design and functionality of the most distinguished context components (the gHN's and the services'). This serves as an illustration of the style and extent of documentation planned for the rest of the Guide.