Archive for the ‘Blog’ Category

  • December 16th, 2017

    Application Transformation Done In 60 Seconds

    Mainframe COBOL/CICS/VSAM to Java Application Server & Relational DB

     

    TL;DR — Watch the video below to see how Heirloom PaaS automatically takes a complex mainframe warehousing application to Java in 60 seconds with 100% accuracy, guaranteed.

    Migrating mainframe workloads to anywhere is hard, right?

    You may have seen vendor presentations that promise an assured migration process, led by analysis tools that paint interrelationships between application artifacts that bedazzle (mislead) you into believing that the complexity is well understood. I get it. It looks good; impressive even.

    It’s also blatant vendor misdirection.

    What you are seeing is superficial at best. A “shiny object” that distracts you from the complexity ahead, and one that steers you towards an expensive multi-year services-led engagement that is aligned with the vendor’s business model, not yours.

    Just ask the vendor “where does the application get deployed?”. If the answer is not “any Java Application Server“, you are being quietly led into a dependency on a labyrinthine proprietary black box that underpins an enforced application software architecture (e.g. MVC). Any assertions that you are now on an agile, open, scalable, performant platform die right there.

    I’m not going to get into an extensive takedown (in this article) of why migration toolsets born of application analyzers are a hugely expensive strategic misstep, because I’d like you to spend the next 60 seconds watching how astoundingly fast Heirloom PaaS is at transforming mainframe applications to Java.

    Need a recap? What you saw was a mainframe COBOL/CICS implementation of the TPC-C benchmark (an application with over 50,000 LOC and 7 BMS screens) being compiled by Heirloom PaaS (without any code changes) and deployed to a Java Application Server for immediate execution via a browser. All the data for this application was previously migrated from VSAM (EBCDIC encoded) to an RDBMS (ASCII encoded). In later articles, we’ll demonstrate how Heirloom PaaS migrates mainframe batch, data and security profiles just as quickly.
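
    For a sense of what the EBCDIC-to-ASCII part of that data migration involves, here is a minimal Java sketch that decodes a single EBCDIC text field into a Java String, ready to be written to an ASCII/Unicode database. The byte values and field are illustrative only; Heirloom’s migration tooling handles this (along with packed-decimal fields and full record layouts) automatically. The sketch assumes a JDK that bundles the IBM037 charset, as standard OpenJDK/Oracle builds do.

        import java.nio.charset.Charset;

        // Minimal sketch: re-encode one EBCDIC (code page 037) text field as a Java String.
        // The byte values below spell an illustrative field value; real migrations also
        // handle packed-decimal (COMP-3) fields, binary fields, and full record layouts.
        public class EbcdicField {
            public static void main(String[] args) {
                Charset ebcdic = Charset.forName("IBM037");      // US/Canada EBCDIC code page
                byte[] vsamField = { (byte) 0xE6, (byte) 0xC1, (byte) 0xD9, (byte) 0xC5, (byte) 0xC8,
                                     (byte) 0xD6, (byte) 0xE4, (byte) 0xE2, (byte) 0xC5 };
                String text = new String(vsamField, ebcdic);     // decode the EBCDIC bytes
                System.out.println(text);                        // prints WAREHOUSE
            }
        }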

    Although the resulting application is 100% Java (and deployable on-premise or to any cloud), Heirloom PaaS provides full support (via Eclipse plug-ins) for ongoing development of the application in the host language (COBOL in this example, but also PL/I) or in the target language (i.e. Java), or both. We built it this way because a transformation is never just about the application artifacts. People are obviously a big part of the IP equation, and securing the engagement of IT staff is essential to a successful transformation. Not just on day 1, but for many years post-deployment.

     

  • December 5th, 2017

    Old Code is NOT Bad Code

    Voyager 1 fires thrusters for the first time in 37 years.

    Running a proprietary assembler program written over 40 years ago, engineers at NASA were able to fire a set of Voyager 1’s thrusters to take over from the attitude-control thrusters that had degraded over the last 40 years. This extends the lifetime of the Voyager 1 probe by another two or three years.

    Is there a finer example than this that old code is not always bad code? Old code also continues to fulfill vital roles for NASA in the same way that it does for your business.

    I did not start writing this article expecting to compare mainframe applications to rocket science, but when the facts fit…

    Just as the assembler code running on Voyager 1 continues to deliver incredible value, let us take a moment to remember the value of the code living on your mainframe.

    Re-purposing hardware and executing decades-old code on it has significantly extended Voyager 1’s life expectancy. The same analogy applies to existing mainframe applications.

    If we migrate the application using Heirloom PaaS, we are changing the hardware environment but keeping the existing code and giving it a new lease of life. This will extend the lifetime of these applications and actually increase their value to your company. Migrated applications do not just run in a new environment; their data, previously hidden away in an EBCDIC silo, suddenly becomes accessible to the rest of the enterprise. Imagine adding decades of experience, in the form of your company’s data, to a big data model designed to generate actionable business insights.

  • December 4th, 2017

    CI/CD with COBOL and CICS

    Introduction

    Reading about Continuous Integration, Continuous Deployment (CI/CD), unit testing and the rest can all seem a million miles away from daily life on the mainframe.

    In fact there is a basic question to be answered for all projects: “Why do we need to do continuous integration?”. Agile guru Martin Fowler puts it best (he usually does) when he says:

    Continuous Integration is a software development practice where members of a team integrate their work frequently; usually, each person integrates at least daily… Each integration is verified by an automated build… to detect integration errors as quickly as possible.

    Large software projects are plagued with integration issues: developer 1 changes code and breaks developer 2’s code. If you are not following a continuous integration approach, this will only be discovered near release, resulting in many late nights for the developers and even more grey hairs for the project owners. It is at this point that coding standards fall precipitously, and “hacking” a change in to “make it work” often means the next project delivery starts with severe technical debt. Unless you are the US government, this debt will have to be paid, resulting in longer release cycles and missing features.

    At Heirloom we develop our products and our code every single day following industry best practices and are extremely proud to call ourselves agile.

    That’s great for Heirloom as a commercial software development company producing a completely automated mainframe workload migration platform, but what about our customers?

    Well, our tooling enables our customers’ mainframe code to fit into industry best-practice CI/CD environments and processes, just like the one below. Because mainframe code compiled with Heirloom PaaS runs on the Java VM, we are in the great position of being able to utilize all the Java infrastructure that has been built in recent years to support enterprise application development.

    [Diagram: a typical continuous-integration process flow]

    How do we implement this?

    Let’s start with the developer workstation. Heirloom PaaS is delivered as both a development environment and a set of runtime JAR files.

    The development environment is an Eclipse perspective. We add COBOL and PL/I as fully supported languages to the industry-standard Eclipse development environment.

    [Screenshot: COBOL development in the Eclipse perspective]

    We can follow a full development cycle here:

    Edit → Compile → Debug → Test → Commit.

    The code being worked on in the image above is a standard CICS application presenting a BMS screen to the user. With Heirloom PaaS, BMS screen definitions are automatically transformed into HTML5 templates, with the initial template design mimicking the green screen that mainframe users are familiar with. See an earlier post, Mainframe Migration? Why?, for a typical green-screen sample program.

    The version control server that we use at Heirloom is BitBucket (our clients have used hosted services like GitHub and GitLab, or their own internal repositories).

    The Continuous Build Server that we use runs the Jenkins open source automation server.

    All our notifications back to the developers are done using Slack channels.

    So what is the process flow?

    1. A developer makes and tests code changes in Eclipse and commits them to the VCS.
    2. A notification is sent to a Slack channel, visible to all developers. [Screenshot: Slack notification from BitBucket]
    3. The Jenkins server is also notified of the change, checks the code out of the repository, and starts the build, test and deploy process. Our build pipeline looks like this: [Screenshot: the Jenkins build pipeline]
    4. When all the tests have run, Jenkins pushes the newly created artifacts to our overnight build area and creates versioned artifacts in a Maven repository. Jenkins then notifies the developers on a different Slack channel whether the build was good (or not). [Screenshot: Slack build notification]

    Summary

    Heirloom PaaS does not just migrate your mainframe code to a new platform, the Java VM; it also enables you to transform your development and integration processes. When you use Heirloom PaaS, your mainframe projects and developers become full citizens that can adopt industry best practices for continuous integration and deployment.

    A prerequisite that many people may have noticed: for all of this to work, there must be a set of tests that can be run automatically as part of the build and integration process. That is the topic of the next article.
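
    As a small preview, below is a sketch of the kind of JUnit test that Jenkins runs during the build. Because code compiled with Heirloom PaaS executes on the Java VM, a plain JUnit test can drive it directly; the InterestCalc class here is only a hypothetical stand-in for a compiled COBOL program, not Heirloom’s actual generated API.

        import static org.junit.Assert.assertEquals;
        import org.junit.Test;

        // Minimal sketch of an automated test in the CI pipeline. InterestCalc is a
        // stand-in for a COBOL program compiled to Java; the real generated class
        // and its call interface will differ.
        public class InterestCalcTest {

            /** Hypothetical stand-in for the compiled COBOL business-logic program. */
            static class InterestCalc {
                long dailyInterestCents(long balanceCents, double annualRate) {
                    return Math.round(balanceCents * annualRate / 365.0);
                }
            }

            @Test
            public void dailyInterestOnTenThousandDollarsAtFivePercent() {
                long cents = new InterestCalc().dailyInterestCents(1_000_000, 0.05);
                assertEquals(137, cents);   // $10,000.00 at 5% is about $1.37 per day
            }
        }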

  • November 29th, 2017

    Mainframe Migration? Why?

    This is part 1 of a series of articles investigating why organisations should consider Mainframe Migration, the options they have and the steps they need to take to transform their mainframe workloads to run on the cloud.

    Let’s set the scene a little for a typical large organisation that runs business-critical applications on the mainframe.

    1. Your mainframe and your mainframe applications just run.
    2. It causes no heartache and no trouble; when was the last time it “went down”?
    3. If it wasn’t for the pesky leasing payments and the endless contract renewal discussions, you probably wouldn’t give it much thought.

    But there are a few vague, uneasy, feelings:

    1. Perhaps you are not as responsive to your customers’ requirements as you could be.
    2. At month end, quarter end and year end, the batch jobs are only just completing within the batch window.
    3. There are a lot of grey beards in the PL/I and COBOL teams.
    4. Adding a new type of customer contract takes months with the mainframe screens and seemingly minutes with the web team.
    5. Let’s face it, a green screen cannot present data in the ways a modern web browser can.

    Would you rather be working with this green screen? A COBOL/CICS application that has been deployed to Pivotal Cloud Foundry using Heirloom PaaS.

    [Screenshot: the green-screen COBOL/CICS mortgage application]

    Or with this Angular and D3 application that is also running on Pivotal Cloud Foundry and using exactly the same back end COBOL/CICS program to run the amortization calculation?

    [Screenshot: the Angular and D3 mortgage application]
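
    For the curious, the amortization calculation itself is nothing exotic. Below is an illustrative sketch of the standard formula in plain Java; it is not the code of the COBOL/CICS program behind these screens, just a reminder that the business logic is the same regardless of which front end calls it.

        // Illustrative only: the standard amortization formula for a fixed-rate loan.
        public final class Amortization {
            /** Monthly payment for a principal, an annual rate (e.g. 0.045) and a term in months. */
            static double monthlyPayment(double principal, double annualRate, int months) {
                if (annualRate == 0.0) {
                    return principal / months;            // interest-free edge case
                }
                double i = annualRate / 12.0;             // periodic (monthly) interest rate
                return principal * i / (1.0 - Math.pow(1.0 + i, -months));
            }

            public static void main(String[] args) {
                // $250,000 over 30 years at 4.5% is roughly $1,266.71 per month
                System.out.printf("%.2f%n", monthlyPayment(250_000, 0.045, 360));
            }
        }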

     

    Your company’s “front-end” applications are being delivered using industry best practices; continuous integration and continuous deployment (CI/CD) of those applications is de rigueur; and, perhaps most importantly, your customers are extremely happy with the features, the responsiveness, and the look and feel. These applications, and the teams that create them, are agile in a way that the mainframe group is not.

    At the same time these web applications are now capable of the same reliability as your mainframe environment, not because every single component works all the time, but because every component has fail-over redundancy built in. It is a different reliability model to be sure but one to which the organisation is gradually becoming accustomed.

    Your company is also becoming accustomed to hardware costs for the non-mainframe applications actually falling each year while still delivering higher and higher performance. Need a little performance kick for Black Friday? Just automatically scale out some more application instances, and pay for the extra throughput ONLY when it is needed.

    So now the scene is set. Check back here for the next post, where I will lay out the various mainframe migration options and the pros and cons of each.

  • August 16th, 2017

    COBOL/CICS Hooks Up With Pivotal Cloud Foundry

    (Cartoon: http://bizarrocomics.com/)

    How To Make An Agile Dinosaur

     

    Would you be skeptical of a claim from a vendor that stated you can take existing mainframe workloads (online and batch), and automatically transform them (with 100% accuracy) into instantly agile Java applications that can immediately be deployed to the cloud? You wouldn’t be alone. For many of our initial client meetings, there’s a palpable sense of disbelief (or, healthy skepticism if you prefer).

    So, here’s another 3-minute video from Ian White, Heirloom Computing’s VP of Engineering, that demonstrates that claim using Pivotal Cloud Foundry (PCF).

    TL;DW? These are the (simplified) steps:

    [Image: the simplified deployment steps]

    Want to try it? Execute the Mortgage App on PCF! This link will continue to work until the marketing budget has been exhausted 🙂

    What happened? We took an online mainframe application and deployed it to PCF in 3 minutes. No misdirection, real code, real results, and a re-platforming project lifecycle that puts you in control (so you can avoid black-box solutions).

    For us at Heirloom Computing, Cloud Foundry is a great example of how Heirloom PaaS maximizes the power of open source stacks to provide clients with a way to include high-value mainframe workloads in strategic initiatives (e.g. cloud, digital transformation etc). One that protects existing function, but also one that is seamlessly integrated with an agile ecosystem.

     

  • August 9th, 2017

    Amazon Alexa Hooks Up With COBOL/CICS (On The Cloud)

    (Image: @ChicagoPhotoSho on Twitter)

    Extending Heirloom PaaS Applications

     

    There’s a lot of chatter about how to make mainframe workloads agile. I have contributed to that chatter myself. The discourse is essential. Boiled down, my assertion is that the mainframe ecosystem is foundationally not agile (and never will be). No amount of DevOps tooling, nor vendor misdirection is going to change that.

    In my last article, I made the following statement:

    Mainframe workloads are an essential part of any digital transformation strategy, but those workloads will persist in a different form. One that protects existing function, but also one that is seamlessly integrated with an agile ecosystem.

    Below is a (3 minute) video that implements the above statement. It was put together inside 2 hours by Heirloom Computing’s VP of Engineering, Ian White.

    This was a mainframe application that was compiled (unchanged) to Java and executed on the cloud using Heirloom PaaS, which automatically makes the workload instantly agile (all transactions are immediately accessible as a service). Agile enough for Ian to very quickly hook it up to Alexa.
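
    To give a sense of what “accessible as a service” means in practice, here is a minimal Java sketch of a client invoking one such exposed transaction over HTTP. The host, path and payload are illustrative assumptions rather than the actual interface used in the video; an Alexa skill handler makes essentially the same call and then speaks the response.

        import java.net.URI;
        import java.net.http.HttpClient;
        import java.net.http.HttpRequest;
        import java.net.http.HttpResponse;

        // Minimal sketch: call a CICS transaction that has been exposed as a RESTful
        // service. The endpoint and JSON payload below are hypothetical.
        public class TransactionClient {
            public static void main(String[] args) throws Exception {
                HttpRequest request = HttpRequest.newBuilder()
                        .uri(URI.create("https://example-app.cfapps.io/api/transactions/INQ1"))
                        .header("Content-Type", "application/json")
                        .POST(HttpRequest.BodyPublishers.ofString("{\"accountId\":\"12345\"}"))
                        .build();

                HttpResponse<String> response = HttpClient.newHttpClient()
                        .send(request, HttpResponse.BodyHandlers.ofString());
                System.out.println(response.statusCode() + " " + response.body());
            }
        }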

  • June 28th, 2017

    COBOL & The Definitive State of the World’s Greatest Legacy Ecosystem

    TL;DR — see picture above, or… a career COBOL’er makes a compelling argument that legacy application systems (COBOL et al) on the IBM Mainframe are killing IT digital transformation initiatives.

    So, a heads-up… this article is going to be self-serving (at least to start with, perhaps longer), as I’ve come to the conclusion that it is necessary for me to “introduce” myself in an attempt to establish a greater level of credibility than I might otherwise be able to muster!

    I’ve been working for over 30 years. My entire career has been in the “COBOL space”, the vast majority of it working with Global 2000 companies to deliver COBOL application development & deployment platforms that were primarily focused on adding value to the IBM Mainframe (“The World’s Greatest Legacy Ecosystem”).

    I have worked at the “coal face” developing bespoke commercial COBOL applications. I have worked developing COBOL compilers and runtimes. I have led global teams of astoundingly brilliant people that have built COBOL ecosystems from scratch. Back in 2010, myself and a group of others with similar career profiles, and significantly greater areas of expertise, founded Heirloom Computing to bring a new COBOL ecosystem to market.

    Heirloom PaaS leverages open-source software stacks (primarily Java). It immediately exposes existing business rules from mainframe workloads as a collection of Java interfaces and RESTful services so they are available to other applications; from day 1 it is absolutely guaranteed to accurately retain existing business logic, data integrity, and security profiles; and it allows application developers (using Eclipse) to continue in COBOL, or Java, or both, so IT can “iterate away” from a constrained model to an agile one at a pace determined by their own unique business drivers. This approach removes the “re-platforming” risk and makes the workload instantly agile.

    We did this because we believe (and our investors and customers have validated) that IT needs to get beyond decades-old legacy systems if they are going to compete in a digital world.

    Credibility enhanced? Either way, on we go…

    The IBM Mainframe is without a doubt (and by far) “The World’s Greatest Legacy Ecosystem”. Its reliability, its pervasiveness, and its role as keeper of systems of record are unmatched. Today, however, that proud legacy is increasingly burdensome. These (crucial) systems are severely and systemically constrained (and today, agility really matters); they have paralyzed IT with a (fearful, non-viable) “do nothing” strategy which consequently inhibits execution of the strategic initiatives (like digital transformation) that are needed to compete. And up to this point, we’ve not even mentioned the operational expense or the risks of an ever-aging, ever-depleting skills pool.

    Some of these systems, especially in government, have eroded/warped to the point that paper processes have been introduced to integrate legacy workloads with new services! This is NOT a failure of DevOps, nor tooling, but a failure of leadership and the brutal reality that mainframe systems of record are inherently NOT agile because a) they were never designed that way, and b) the COBOL ecosystem itself (an archaic compute-model, a procedural language, a failure to embrace open source, a lack of application frameworks, an entrenched culture, …) is NOT agile.

    In article, after article, after article, IT leaders and analysts have clearly identified the challenge. Progressive enterprises like GE and Capital One are already working on solutions. Mainframe workloads are an essential part of any digital transformation strategy, but those workloads will persist in a different form. One that protects existing function, but also one that is seamlessly integrated with an agile ecosystem.

  • March 29th, 2017

    Trump, Kushner, COBOL

    Not 3 words you’d immediately assemble together, but that’s exactly what Senior ComputerWorld Editor, Patrick Thibodeau, did yesterday.

    His article was prompted by a White House announcement of an “Office of American Innovation” to oversee the modernization of federal IT.

    The article then goes on to give Compuware a platform to launch a somewhat bizarre defense of COBOL, as if somehow, wrapping COBOL applications up in DevOps methodologies makes them agile, and consequently, the mainframe can be seen as (according to Chris O’Malley, Compuware’s President/CEO) “… a working environment that looks exactly like Amazon (Web Services)”.

    No. It’s not. There’s no amount of makeup that you can apply to my face to make me look like Brad Pitt. Fundamentally, all the required structures for that transformation just do not exist.

    There’s much to applaud with Compuware’s mission to modernize and retool the application development lifecycle on the mainframe and impart valuable new skill sets to a workforce that has been largely isolated from considering different approaches to the art of application development. However, beyond that DevOps veneer, you are still working with COBOL. If that’s where you want to be, go for it.

    As Shawn McCarthy, an analyst at IDC said later in the article: “… the challenge with older COBOL systems is that many were not designed to be extensible and everything that needs to be done has to rely on custom code”.

    And that’s essentially why no matter how much makeup you apply, COBOL systems on the mainframe will never be truly agile. Instead, for as long as they persist, they will continue to be an increasingly burdensome anchor that will slowly but surely impinge on an enterprise’s ability to compete.

  • October 10th, 2015

    EBP Architecture and Scalability

    Introduction

    Elastic Batch Platform (EBP), Elastic Transaction Platform (ETP) and the Elastic COBOL runtime environment are designed for scalability and high availability in a public or private cloud environment. First, we present the layout of a typical COBOL application (fig. 1), which divides the application into the typical 3-layer model: (1) presentation logic, (2) business logic and (3) database logic.

    Fig. 1. Structure of a typical COBOL application.

    For online applications the presentation logic may be a screen or GUI interface, but for batch applications it may be categorized as the report-writer section of the application. The Elastic COBOL runtime environment provides the compatibility framework for running the application as if it were operating on a mainframe:

    • COBOL data types such as COMP-1, COMP-3, etc. (see the sketch after this list),
    • COBOL file I/O, together with the ability to tie COBOL FDs to the DD names of a batch JCL deck,
    • COBOL database I/O (e.g., EXEC SQL) interfacing the application to arbitrary SQL-oriented databases.
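
    To illustrate why a compatibility layer for data types is needed, the sketch below decodes a COMP-3 (packed decimal) field by hand. Elastic COBOL’s runtime does this transparently; the example simply shows what the mainframe data actually looks like, assuming a PIC S9(5)V99 COMP-3 field.

        import java.math.BigDecimal;

        // Illustrative only: decoding a COBOL COMP-3 (packed decimal) field in Java.
        // Each byte holds two BCD digits; the final low nibble is the sign (0xD = negative).
        public class PackedDecimal {
            static BigDecimal decode(byte[] packed, int decimalPlaces) {
                StringBuilder digits = new StringBuilder();
                for (int i = 0; i < packed.length; i++) {
                    digits.append((packed[i] >> 4) & 0x0F);       // high nibble is always a digit
                    if (i < packed.length - 1) {
                        digits.append(packed[i] & 0x0F);          // low nibble is a digit...
                    }
                }
                int sign = (packed[packed.length - 1] & 0x0F) == 0x0D ? -1 : 1;  // ...except the last, which is the sign
                return new BigDecimal(digits.toString())
                        .movePointLeft(decimalPlaces)
                        .multiply(BigDecimal.valueOf(sign));
            }

            public static void main(String[] args) {
                byte[] field = { 0x12, 0x34, 0x56, 0x7C };        // PIC S9(5)V99 COMP-3 value +12345.67
                System.out.println(decode(field, 2));             // prints 12345.67
            }
        }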

    The entire diagram of fig. 1 is placed in the cloud Platform-as-a-Service (PaaS) deployment package, shown in fig. 2 as “Standard COBOL application.”

    Fig. 2. The deployment Platform-as-a-Service (PaaS) environment consisting of ETP, EBP and cloud-specific facilities.

    The Heirloom Enterprise Deployment PaaS provides the glue layers between the facilities the COBOL program expects to have available to it when running on the mainframe and what is provided by the underlying cloud platform. Heirloom PaaS is deployed through Java EE servers (e.g., IBM WebSphere, Apache Geronimo) for ETP and/or web servlet containers (e.g., EMC tcServer, Apache Tomcat). Heirloom PaaS consists of the following components:

    • SQL database mapping from DB2 to other databases, such as PostgreSQL or SQLFire.
    • Customer Web Portal, which maps CICS BMS screen maps to Web 2.0 pages, JavaScript and XML (e.g., RESTful web services), and maps COBOL reports from the application to REST web services that can be issued to the EBP services handler.
    • Monitoring and Operations Management (e.g., EMC Hyperic)

    The entire diagram of fig. 2 (called Heirloom PaaS Deployment) is placed in the private cloud Infrastructure-as-a-Service (IaaS) environment provided by the hardware environment, as shown in fig. 3.

    Fig. 3. The private cloud deployment Infrastructure-as-a-Service (IaaS) environment.

    Each of the boxes in fig. 3 represents a virtual machine running in the IaaS. In addition to the portions containing the ETP or EBP and customer applications (Heirloom PaaS Deployment), there are also VMs for the clustered database environment (e.g., PostgreSQL or SQLFire).

    To achieve scalability in the batch environment, Heirloom PaaS VMs containing EBP are started as demand for batch resources increases. The EBP starts when the VM it is running in is started. As each EBP starts, it registers with the centralized Heirloom Elastic Scheduler Platform (ESP), which relays batch job submissions from an external scheduler (e.g., the Control-M Linux Agent). ESP also has the capability to define batch jobs and tasks (rules) and run them directly. Fig. 4 shows this interaction.

    Fig. 4. The interaction between the ESP and EBP within the Heirloom PaaS virtual machines and external scheduling agents.

    Let’s take the example where an external scheduler injects jobs into the system. When a batch job is injected by the external scheduler (e.g., the Control-M Agent), these steps follow:

    1. The Control-M Linux Agent uses the Linux curl utility to submit jobs to the ESP via its web services interface (a Java equivalent is sketched after this list).
    2. The scheduler initializes the EBP with job classes and starts job-class initiators (providing job parallelism within an EBP).
    3. The scheduler submits the batch job to the EBP via its web services interface.
    4. The job executes and returns its condition code and output datasets to the scheduler, which stores them on NFS-attached drives for later review.
    5. An indication of job success/failure, and/or output datasets, is returned to the Control-M Agent.
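
    For step 1, curl is just one convenient HTTP client; the same submission could be made from Java (or any other language). The sketch below is illustrative only: the ESP host name, path and payload format are assumptions, not ESP’s documented interface.

        import java.net.URI;
        import java.net.http.HttpClient;
        import java.net.http.HttpRequest;
        import java.net.http.HttpResponse;

        // Minimal sketch: submit a JCL deck to ESP over an assumed web services endpoint.
        public class SubmitJob {
            public static void main(String[] args) throws Exception {
                String jcl = String.join("\n",
                        "//NIGHTLY  JOB (ACCT),'NIGHTLY BATCH',CLASS=A",
                        "//STEP1    EXEC PGM=BILLING",
                        "//SYSOUT   DD SYSOUT=*");

                HttpRequest request = HttpRequest.newBuilder()
                        .uri(URI.create("http://esp.example.internal:8080/esp/jobs"))   // hypothetical endpoint
                        .header("Content-Type", "text/plain")
                        .POST(HttpRequest.BodyPublishers.ofString(jcl))
                        .build();

                HttpResponse<String> response = HttpClient.newHttpClient()
                        .send(request, HttpResponse.BodyHandlers.ofString());
                System.out.println("ESP responded: " + response.statusCode() + " " + response.body());
            }
        }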

    Scalability and high availability are achieved by instantiating more than one Heirloom PaaS/EBP virtual machine within an IaaS frame and across multiple frames. See fig. 5.

     

    Fig. 5. Multiple JES virtual machines in a “JESPlex” cluster.

    Within each virtual machine, an EBP subsystem environment runs as part of the tcServer / Apache Tomcat started tasks. As needed (and following scheduler rules), one or more job classes are defined within EBP. Classes contain attributes for the jobs that will run under them: elapsed and CPU time allotments, and storage and network utilization limits. Also following scheduler rules, one or more class initiators are opened under each class. This allows a degree of parallelism within a virtual machine (see the sketch below). Then, as demand grows, the vCloud management infrastructure (acting under further rules) will start additional virtual machines. These VMs may be on the same IaaS frame or on different frames. Each new EBP registers with the ESP, as described in fig. 4, and begins operating on batch jobs sent to it by ESP.
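
    For readers who think in Java rather than JES terms, a job class with its initiators is loosely analogous to a bounded thread pool: the class defines the limits, and each initiator is a worker that runs one job at a time. The sketch below is an analogy only, not how EBP is implemented.

        import java.util.concurrent.ExecutorService;
        import java.util.concurrent.Executors;
        import java.util.concurrent.TimeUnit;

        // Analogy only: a job class with three initiators behaves like a pool of three
        // workers, so at most three jobs of that class run in parallel within one VM.
        public class JobClassAnalogy {
            public static void main(String[] args) throws InterruptedException {
                ExecutorService classAInitiators = Executors.newFixedThreadPool(3);   // CLASS=A, 3 initiators

                for (int jobNumber = 1; jobNumber <= 10; jobNumber++) {
                    final int job = jobNumber;
                    classAInitiators.submit(() -> System.out.println("JOB" + job + " ran in " +
                            Thread.currentThread().getName()));
                }

                classAInitiators.shutdown();
                classAInitiators.awaitTermination(1, TimeUnit.MINUTES);
            }
        }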

    All batch shared data files (input and output datasets) are accessible to any VM via NFS. The shared datasets also contain the actual COBOL applications that will be executed by the job steps within the batch jobs running under each initiator. Dataset locking information communicated among the EBPs prevents batch jobs with exclusive access to resources from conflicting with other jobs requesting the same resources. Similarly, the Input Spool (JCL), Output Spool (report SYSOUTs) and Temporary Spool (working files) are also shared among systems via NFS.

    Should a VM or its subsystems (e.g., EBP) fail, batch jobs are re-queued into the Input Spool and dispatched to other waiting EBPs.  In this way recovery is automatic.  EMC Storage Frame components will ensure that the data stores themselves are replicated for availability purposes.

  • July 4th, 2015

    COBOL Leads Us Back To The Future

    Heirloom PaaS in the News

    We were recently featured in the InformationWeek article COBOL Leads Us Back To The Future by Curtis Franklin Jr., Executive Editor, Technical Content, 6/21/2015.

    COBOL defined business software development for decades. Now, is it over the hill or just hitting its prime?


    Elastic COBOL

    Elastic COBOL is part of Heirloom Platform-as-a-Service (PaaS), an application development toolset that is a plug-in to the Eclipse IDE framework. Elastic COBOL allows mainframe applications (including CICS and JCL) to execute as Java applications. You can continue to develop applications in COBOL or in Java, or both, enabling the transformation to Java to occur at a pace that is optimal for your business.

    You can download Elastic COBOL for free. It is available on Windows, Linux, Mac OS X, Raspberry Pi and the cloud. That’s right — Raspberry Pi. So you can get out there and build an enterprise accounting system on a platform that lives in an Altoids tin.

    As with so many of these compilers, Java (rather than machine code) is the target. People will argue about whether that’s a good thing or not, but the fact is that it makes the compiler much simpler to write and maintain. So get out your soldering iron, dust off your COBOL, and get your Altoid tin running.

    Heirloom PaaS uses patented compiler technology to automatically transform mainframe applications into highly extensible Java source-code, with 100% accuracy, while guaranteeing the preservation of existing business logic. Read more…