With Cobol2Cloud, Eranea becomes a partner of CloudBees to evolve Cobol applications for Cloud Computing

Lausanne, November 20th, 2012 – Eranea announces today its entrance into the ecosystem of CloudBees (Los Altos, USA) as a "Verified Partner" with Cobol2Cloud. This solution enables the automated migration of business applications written in Cobol to Java, making them ready for Cloud Computing.

CloudBees is the undisputed leader in Java infrastructure delivered as "Platform as a Service" (PaaS). Its vision is to free developers from the management and operation of that infrastructure and to let them leverage all the benefits of Cloud Computing. The PaaS services by CloudBees allow developers to remain exclusively focused on the functional improvement of their software assets.

Eranea's Cobol2Cloud solution is an extension of its technology toward Cloud Computing. This technology has already been thoroughly validated on large migration projects running on the private infrastructures of its customers. Those projects span a wide range of industries: finance, media, public administration, software publishing, etc.

Active Cobol applications still represent huge investments: public figures indicate that over 200 billion lines of Cobol source code are in production worldwide, and that 2 million programmers add 5 billion new lines each year. Consequently, as consumers, we are all "in touch" with a Cobol application more than 10 times a day (phone call billing, credit card purchases, e-commerce and e-services, etc.).

The combination of technologies of the two partners, Eranea and CloudBees, brings those very large software assets forward: legacy Cobol applications can be "mutated" via automated transcoding to Java + HTML/Ajax and Cloud Computing in a very efficient manner. They can then run optimally on CloudBees' RUN@cloud infrastructure (application servers: Tomcat, JBoss, etc.). After the migration, the generated Java source code becomes the new source code base of the transformed application. It is maintained with state-of-the-art development tools and can easily leverage all the benefits of Java / JEE.

"Eranea makes Cobol applications compatible with cloud computing in a native and standard manner. CloudBees provides corporations with optimized tools and services to develop, test and operate enterprise applications in full compliance with current standards: Java for development and Amazon Web Services for infrastructure. Since we got in touch with Eranea, it became obvious that CloudBees is the optimal cloud platform for testing and operating the applications transformed by Cobol2Cloud," says François Dechery, VP International Business Development at CloudBees.

The PaaS services by CloudBees deliver to those migrated applications all the benefits of Cloud Computing: maximal flexibility, smooth growth and, last but not least, optimal costs. In traditional projects on private infrastructures, the savings brought by this modernization very often exceed 80% of the TCO of the original system!

"The combination of CloudBees' and Eranea's technologies represents an optimal package for enterprises willing to enter Cloud Computing while protecting their huge past investments in homemade applications that reflect their distinctive know-how and competences," says Didier Durand, co-founder of Eranea.

This partnership between Eranea and CloudBees creates a huge potential for savings and for further investment in the modernization of legacy Cobol applications. This technological shift will most probably extend the life of those applications by a couple of decades, on top of those they have already been through!

More information:


Presentation at Jazoon 2012 – Automated migration to Java of large business applications

Yesterday, we presented at Jazoon 2012 (Zurich).

Our topic was “key success factors” when migrating large business applications to Java and Linux.

We emphasized all the lessons learned from previous projects:

  • full automation is a must
  • iso-functionality is key both for end users' comfort and for the smoothness of the migration
  • generated code structure must match expectations of existing teams

… and more: have a look at our presentation below to get all of it.


Large Cobol transactional systems migrated to Java: impact of garbage collection

This post is part of our series about transcoding. Read them all via the "Transcoding" link to get an understanding of how we proceed to convert an application 100% automatically from Cobol to Java.

Cobol to Java for large systems

We always migrate (very) large applications with advanced Cobol programs: we have worked on assets ranging from 4 to 12 million lines of Cobol source code. These big applications are in use in large corporations: many thousands of users process data at very high transaction rates. Some programs used in those transactions may have thousands of variables, and a single transaction can chain calls to tens of those programs. Total activity may reach millions of transactions per day.

This article explains how we handle the usual trade-off between memory consumption and CPU cycles in our runtime framework. It is all about reducing the impact of "blocking garbage collection" in Java, which means thousands of users stopped in their business activity.

Java garbage collection: principles

Java liberates the programmer from managing object memory deallocation: the JVM does it by itself, recognizing the objects that are fully dereferenced (i.e. not pointed to by any other live object) and freeing the corresponding memory for further reallocation. This is a big relief for programmers and a nice boost for their productivity.
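As a trivial illustration (a hypothetical snippet, not taken from our runtime), an object becomes a candidate for collection as soon as no live reference points to it anymore:

    public class GcPrinciple {
        public static void main(String[] args) {
            StringBuilder buffer = new StringBuilder("first allocation");
            // The only reference to the first object is overwritten here: that object
            // is now unreachable and eligible for garbage collection by the JVM.
            buffer = new StringBuilder("second allocation");
            System.out.println(buffer); // only the second object is still live
        }
    }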

But somebody has to take care of freeing the memory corresponding to those objects that are no longer referenced by any other object. That is the mission of the garbage collector in the Java Virtual Machine. Some of this housekeeping is done in the JVM by system threads running in parallel with the application code. This is usually called "non-blocking" garbage collection. [For more details about garbage collection, the corresponding Wikipedia article is a great starting point.]

But at some point, when the heap gets really messy and overcrowded, the JVM has to stop the application code in order to compact the active objects in memory as much as possible and make new allocations simpler and more efficient. This means moving the objects that are still live and updating their references within the running code. Hence the full stop required for the application, in order to avoid transient dangling pointers. This kind of garbage collection is usually called a full GC.
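If you want to observe these pauses on your own system, the standard HotSpot logging options of the Java 6/7 era print every collection with its duration (flag names may differ on other JVMs or later versions, and the jar name below is just a placeholder); lines marked "Full GC" are the blocking pauses discussed here:

    java -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -jar application.jar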

Impact of garbage collection on user productivity

In our first project, we did not initially take care of garbage collection, especially full GC, considering it a natural and acceptable tribute to pay for the relief of memory management by the system. But when the load increased to a few hundred users, we had to rethink it thoroughly, as we would experience 20 to 30 periods of full GC of 20 to 25 seconds each during office hours. When you do the full maths, it means hundreds of hours of work lost collectively per day: 25 pauses of about 22 seconds already cost each user roughly 9 minutes a day, i.e. well over a hundred hours of collective waiting per thousand users, all of them just watching the traditional hourglass icon until it disappears…

At that time it was Java 5; now the standard is Java 7, where a lot of progress has been made to reduce full GC: this article by Oracle explains how.

Even with this evolution across Java versions to reduce full GC to a minimum, it still happens on live systems.

Garbage collection under control

So, over time, we developed our strategy to reduce full GC to a strict minimum: just a couple per day, and they happen mostly at periods of very low activity. In fact, most happen very early in the morning when no user is active yet: the cleanup by the garbage collector is done right after system start.

Why is that so? Because we rely heavily on our own, very controlled object allocation:

  • the transcoded application code in Java does not allocate any new objects: it relies on the Vars objects allocated by our runtime for the transcoded working storage section (see our other post on Cobol variables for more details).
  • those Vars objects belong to the category of "managed objects" in our framework. Most objects needed to represent Cobol programs and their processing activity also belong to this category: WorkingStorageSection, CobolProgram, LinkageSection, SQL connections, user terminals, etc.
  • all managed objects are handled by a class acting as a very standard object factory (more details on the Factory pattern in this Wikipedia article). A cache of already-allocated but currently free instances of each managed class is part of this factory.

When a request for a new instance of a given managed class arrives at the factory, it first checks whether a free instance is already available. If so, it is handed back right away (after re-initialization of its data values) to the requester; if not, a new instance is created.
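A minimal sketch of this mechanism (class and method names are ours for illustration only, not the actual API of our runtime):

    import java.util.ArrayDeque;
    import java.util.Deque;

    // Every managed object knows how to reset itself to a clean initial state.
    interface Managed {
        void reinitialize();
    }

    // Factory with a cache of free instances: reuse first, allocate only when the cache is empty.
    abstract class ManagedFactory<T extends Managed> {

        private final Deque<T> freeInstances = new ArrayDeque<T>();

        // Subclasses know how to build one fresh instance of their managed class.
        protected abstract T createInstance();

        public synchronized T acquire() {
            T instance = freeInstances.poll();   // reuse a cached instance if one is available
            if (instance == null) {
                instance = createInstance();     // otherwise allocate a new one (rare after warm-up)
            }
            instance.reinitialize();             // always hand back clean data values
            return instance;
        }

        public synchronized void release(T instance) {
            freeInstances.push(instance);        // a returned instance feeds the cache for later reuse
        }

        // Pre-allocate instances, e.g. at JVM start, to avoid allocations during office hours.
        public synchronized void preallocate(int count) {
            for (int i = 0; i < count; i++) {
                freeInstances.push(createInstance());
            }
        }

        public synchronized int freeCount() {
            return freeInstances.size();
        }
    }

The same acquire/release pair applies to every managed class (working storage sections, program descriptions, SQL connections, etc.); only createInstance() differs.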

The cache is initialized at system start with a number of instances given as a parameter. On the other side, via MBeans conforming to Java Management Extensions, also known as JMX (more details in this Wikipedia article), we collect lots of numbers and statistics about the activity of the factory and its associated cache. After just a few days of observation, it is very easy to determine the right value for the "average maximum" number of simultaneous instances of a given managed class.
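These statistics can be exposed with a plain standard MBean registered against the platform MBean server; a possible sketch, again with illustrative names (the two types would normally live in separate files):

    import java.lang.management.ManagementFactory;
    import javax.management.ObjectName;

    // Standard MBean contract: JMX derives the exposed attributes from these getters.
    public interface FactoryStatsMBean {
        int getFreeInstances();
        long getTotalAcquisitions();
        long getNewAllocations();
    }

    public class FactoryStats implements FactoryStatsMBean {
        private volatile int freeInstances;
        private volatile long totalAcquisitions;
        private volatile long newAllocations;

        // ... counters updated by the factory on each acquire()/release() ...

        public int getFreeInstances()      { return freeInstances; }
        public long getTotalAcquisitions() { return totalAcquisitions; }
        public long getNewAllocations()    { return newAllocations; }

        // Register so the numbers show up in JConsole/VisualVM during a few days of observation.
        public void register(String managedClassName) throws Exception {
            ObjectName name = new ObjectName(
                "runtime.factory:type=FactoryStats,class=" + managedClassName);
            ManagementFactory.getPlatformMBeanServer().registerMBean(this, name);
        }
    }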

By using this value in the property file of our runtime framework, we obtain a very simple way to minimize full GC on the transcoded application during office hours:

  • the initialization of our runtime framework happens at JVM start (let's say at 05:00 am)
  • all the instances of the managed classes defined by our initialization parameters are created by the factory and stored in its list of available instances
  • this initialization sequence causes fairly intense memory fragmentation while (tens of) millions of new objects, some of them transient, are allocated
  • while it happens, normal GC activity takes place (even full GC), but that is fine because nobody is connected yet
  • when it is over, the application is ready to run with caches full of the needed instances of variables, working storage sections, program descriptions, etc., and the JVM memory space is rather clean and compacted, because the GC could do the needed memory sweeps and object compactions without harm while nobody was there (a sketch of this warm-up sequence follows right after this list)
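A sketch of what that startup sequence can look like, reusing the hypothetical ManagedFactory from above and a cache-sizes property file whose name and keys are purely illustrative:

    import java.io.FileInputStream;
    import java.io.IOException;
    import java.util.Map;
    import java.util.Properties;

    public class RuntimeWarmup {

        // Called once at JVM start (e.g. around 05:00), before any user connects.
        // 'factories' maps one property key per managed class to its factory.
        public static void warmUp(String propertyFile,
                                  Map<String, ManagedFactory<?>> factories) throws IOException {
            Properties sizes = new Properties();
            FileInputStream in = new FileInputStream(propertyFile);
            try {
                sizes.load(in);
            } finally {
                in.close();
            }
            for (Map.Entry<String, ManagedFactory<?>> entry : factories.entrySet()) {
                // "average maximum" of simultaneous instances, as observed via JMX
                int count = Integer.parseInt(sizes.getProperty(entry.getKey(), "0"));
                entry.getValue().preallocate(count);
            }
            // Suggest a collection now, while nobody is connected, so the heap is
            // compacted before office hours (the JVM is free to ignore the hint).
            System.gc();
        }
    }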

So, when people arrive, even all at once, the GC activity remains very low because all the needed objects are already allocated: the factory just hands out the objects that are ready in its list.

Of course, our strategy is a clear application of the usual time-versus-memory trade-off in computing: in order to save time, you have to spend more on memory.

This extra memory might have been an issue in the past on proprietary systems, where memory was extremely expensive. But as our favorite and recommended migration targets are x86 servers with very cheap components, adding a gigabyte of memory to maintain user productivity is no longer an issue!
