The Elastic Batch Platform (EBP), Elastic Transaction Platform (ETP) and Elastic COBOL runtime environments are designed for scalability and high availability in a public or private cloud environment. First, we present the layout of a typical COBOL application (fig. 1), which divides the application into the standard three-layer model: (1) presentation logic, (2) business logic and (3) database logic.
Fig. 1. Structure of typical COBOL application.
For online applications the presentation logic may be a screen or GUI interface, but with batch applications it may be categorized as the report-writer section of the application. The Elastic COBOL runtime environment provides the compatibility framework for running the application as if it were operating on a mainframe:
The entire diagram of fig. 1 is placed in the cloud Platform-as-a-Service (PaaS) deployment package, shown in fig. 2 as “Standard COBOL application.”
Fig. 2. The deployment Platform-as-a-Service (PaaS) environment consisting of ETP, EBP and cloud-specific facilities.
The Heirloom Enterprise Deployment PaaS provides the glue layers between the facilities the COBOL program expects to have available to it when running on the mainframe and those provided by the underlying cloud platform. Heirloom is deployed through Java EE servers (e.g., IBM WebSphere, Apache Geronimo) for ETP and/or web servlet containers (e.g., EMC tcServer, Apache Tomcat). The Heirloom deployment consists of the following components:
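The glue-layer idea can be illustrated with a minimal Java sketch: the application codes against a mainframe-style facility interface, and the deployment layer binds a cloud-backed implementation behind it. All names below (`DatasetFacility`, `CloudDatasetFacility`) are illustrative assumptions, not actual Heirloom APIs.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical mainframe-style facility interface the COBOL program expects.
interface DatasetFacility {
    void write(String dsn, String record);   // dataset write by DSN
    String read(String dsn);                 // dataset read by DSN
}

// Cloud-backed implementation bound in by the PaaS glue layer; a real
// deployment might map DSNs to NFS paths or object-store keys instead
// of this in-memory map.
class CloudDatasetFacility implements DatasetFacility {
    private final Map<String, String> store = new HashMap<>();
    public void write(String dsn, String record) { store.put(dsn, record); }
    public String read(String dsn) { return store.get(dsn); }
}

public class GlueLayerSketch {
    public static void main(String[] args) {
        // The deployment layer, not the application, chooses the implementation.
        DatasetFacility facility = new CloudDatasetFacility();
        facility.write("PROD.PAYROLL.MASTER", "record-1");
        System.out.println(facility.read("PROD.PAYROLL.MASTER"));
    }
}
```

The design point is that the application remains unchanged; only the binding behind the facility interface differs between mainframe and cloud.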
The entire diagram of fig. 2 (called Heirloom Deployment) is placed in the private cloud Infrastructure-as-a-Service (IaaS) environment provided by the hardware environment, as shown in fig. 3.
Fig. 3. The private cloud deployment Infrastructure-as-a-Service (IaaS) environment.
Each of the boxes in fig. 3 represents a virtual machine running in the IaaS. In addition to the VMs containing the ETP or EBP and customer applications (the Heirloom Deployment), there are also VMs for the clustered database environment (e.g., PostgreSQL or SQLFire).
To achieve scalability in the batch environment, Heirloom VMs containing EBP are started as demand for batch resources increases. The EBP starts when the VM it is running in is started. As each EBP starts, it registers with the centralized Heirloom Elastic Scheduler Platform (ESP), which relays batch job submissions from an external scheduler (e.g., the Control-M Linux Agent). ESP also has the capability to define batch jobs and tasks (rules) and run them directly. Fig. 4 shows this interaction.
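The register-then-relay flow can be sketched in a few lines of Java. This is a minimal sketch under assumed names (`Esp`, `Ebp`): each EBP registers with the central ESP as it starts, and the ESP relays incoming jobs to registered EBPs, here with simple round-robin dispatch standing in for the platform's real scheduling rules.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical central scheduler: EBPs register on startup, and jobs
// submitted by an external scheduler are relayed to a registered EBP.
class Esp {
    private final List<Ebp> registered = new ArrayList<>();
    private int next = 0;

    void register(Ebp ebp) { registered.add(ebp); }   // called by each EBP as it starts

    String relay(String job) {                        // job arrives from an external scheduler
        Ebp target = registered.get(next++ % registered.size());
        return target.accept(job);
    }
}

class Ebp {
    private final String name;
    Ebp(String name, Esp esp) { this.name = name; esp.register(this); }  // register on startup
    String accept(String job) { return name + " accepted " + job; }
}

public class RegistrationSketch {
    public static void main(String[] args) {
        Esp esp = new Esp();
        new Ebp("EBP-1", esp);
        new Ebp("EBP-2", esp);                  // a second VM/EBP comes up as demand grows
        System.out.println(esp.relay("JOB1"));  // EBP-1 accepted JOB1
        System.out.println(esp.relay("JOB2"));  // EBP-2 accepted JOB2
    }
}
```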
Fig. 4. The interaction between the ESP and EBP within the Heirloom virtual machines and external scheduling agents.
Let’s take the example where an external scheduler injects jobs into the system. When a batch job is injected by the external scheduler (e.g., the Control-M Agent), the following steps occur:
Scalability and high availability are achieved by instantiating more than one Heirloom/EBP virtual machine within an IaaS frame and across multiple frames. See fig. 5.
Fig. 5. Multiple JES virtual machines in a “JESPlex” cluster.
Within each virtual machine, an EBP subsystem environment runs as part of the tcServer / Apache Tomcat started tasks. As needed (and following scheduler rules), one or more Job Classes are defined within EBP. Classes contain attributes for the jobs that will run under them: elapsed- and CPU-time allotments, and storage and network utilization limits. Also following scheduler rules, one or more Class Initiators are opened under each class, allowing a degree of parallelism within a virtual machine. Then, as demand grows, the vCloud management infrastructure (acting under further rules) starts additional virtual machines. These VMs may be on the same IaaS frame or on different frames. Each new EBP registers with the ESP, as described in fig. 4, and begins operating on batch jobs sent to it by the ESP.
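The class/initiator relationship can be modeled as a small Java sketch: a Job Class carries the limit attributes described above, and each Class Initiator is modeled as one worker thread, so the initiator count bounds how many jobs of that class run in parallel within a VM. The `JobClass` shape and field names are illustrative assumptions, not EBP's actual definition format.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class JobClassSketch {
    // Hypothetical Job Class attributes: time allotments plus storage limits,
    // and the number of initiators opened under the class.
    record JobClass(String name, long maxElapsedSeconds, long maxCpuSeconds,
                    long maxStorageMb, int initiators) { }

    public static void main(String[] args) throws InterruptedException {
        JobClass classA = new JobClass("A", 3600, 600, 4096, 2);  // two initiators

        // Each initiator is one worker: at most classA.initiators() jobs
        // of this class run concurrently in this VM.
        ExecutorService initiators = Executors.newFixedThreadPool(classA.initiators());
        for (int i = 1; i <= 4; i++) {
            int job = i;
            initiators.submit(() ->
                System.out.println("class " + classA.name() + " ran JOB" + job));
        }
        initiators.shutdown();
        initiators.awaitTermination(5, TimeUnit.SECONDS);
    }
}
```

Raising the initiator count widens parallelism inside one VM; starting another Heirloom/EBP VM (as the vCloud rules do) widens it across the cluster.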
All batch shared data files (input and output datasets) are accessible to any VM via NFS. The shared datasets also contain the actual COBOL applications executed by the Job Steps of the Batch Jobs running under each Initiator. Dataset locking information communicated among the EBPs prevents batch jobs holding exclusive access to a resource from conflicting with other jobs requesting the same resource. Similarly, the Input Spool (JCL), Output Spool (report SYSOUTs) and Temporary Spool (working files) are shared among systems via NFS.
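The exclusive-versus-shared semantics resemble a read/write lock: many jobs may hold shared access to a dataset at once, but exclusive access excludes everyone else. The sketch below shows that behavior with a per-dataset `ReentrantReadWriteLock`; in the platform this locking information is communicated among the EBPs, whereas here a single in-process map stands in for that, so the code is an analogy, not the actual mechanism.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class DatasetLockSketch {
    // One lock per dataset name (DSN); in the real system this state is
    // shared among EBPs rather than held in one process.
    private static final Map<String, ReentrantReadWriteLock> locks = new ConcurrentHashMap<>();

    static ReentrantReadWriteLock lockFor(String dsn) {
        return locks.computeIfAbsent(dsn, k -> new ReentrantReadWriteLock());
    }

    public static void main(String[] args) {
        ReentrantReadWriteLock lock = lockFor("PROD.INPUT.DATASET");
        lock.readLock().lock();                          // job 1: shared access granted
        boolean shared = lock.readLock().tryLock();      // job 2: shared access also granted
        boolean exclusive = lock.writeLock().tryLock();  // job 3: exclusive access must wait
        System.out.println("shared granted: " + shared
                + ", exclusive granted: " + exclusive);  // true, false
    }
}
```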
Should a VM or one of its subsystems (e.g., EBP) fail, its batch jobs are re-queued into the Input Spool and dispatched to other waiting EBPs; in this way recovery is automatic. EMC Storage Frame components ensure that the data stores themselves are replicated for availability.
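The recovery path above can be sketched as a simple re-queue loop: a failed job goes back to the shared Input Spool and a surviving EBP dequeues it again. A plain in-memory deque stands in for the NFS-shared spool, and `runWithRecovery` is an invented helper name, not a platform API.

```java
import java.util.ArrayDeque;
import java.util.Deque;

public class FailoverSketch {
    // If the EBP running a job fails, the job is returned to the shared
    // Input Spool and another waiting EBP dispatches it.
    static String runWithRecovery(Deque<String> inputSpool, boolean firstEbpFails) {
        String job = inputSpool.poll();   // an EBP takes the job from the Input Spool
        if (firstEbpFails) {              // VM/subsystem failure before completion
            inputSpool.addFirst(job);     // job is re-queued into the Input Spool
            job = inputSpool.poll();      // a surviving EBP picks it up
        }
        return job;                       // the job that actually ran to completion
    }

    public static void main(String[] args) {
        Deque<String> inputSpool = new ArrayDeque<>();
        inputSpool.add("JOB42");
        // Simulate a failure on first dispatch; the job is still completed.
        System.out.println("completed " + runWithRecovery(inputSpool, true));
    }
}
```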