FAQ

 

  1. Your question is not answered in this FAQ ?
  2. Why transform to Java and not rehost legacy (Cobol) application ?
  3. Why transcode and not rewrite from scratch ?
  4. Why do you put so much emphasis on incremental mainframe migration ?
  5. How do you achieve this incremental migration ?
  6. What are the advantages of full automation ?
  7. What are the advantages of strict iso-functionality ?
  8. Why do you emphasize so much automated testing ?
  9. Why did you put so much energy in the development of NeaControlCenter ?
  10. How do you adapt to specific situations : “exotic” technologies, unsupported languages / constructs ?
  11. Why did you choose Java ?

[back to top of FAQ]
1. Your question is not answered in this FAQ ?     

We can always provide additional details on any point of our solution if it is not answered below. Please reach us via email at contact@eranea.com.

Beyond words, we can also demo our solution at any time. Just get in touch !

[back to top of FAQ]
2. Why transform to Java and not rehost legacy (Cobol) application ?

Mainframe applications are written in legacy languages : Cobol is the flagship of those languages. Even though still very present (over 200 billion lines of source code still active), Cobol and other legacy languages are perceived as obsolete in the world of x86 servers, Linux and cloud computing.

Additionally, the developers mastering these older languages are a scarce resource, and this scarcity grows day after day as many of them retire. Young engineers coming out of university don’t want to learn those “oddities”. So, CIOs having a mainframe in-house face a serious recruiting challenge !

This mismatch between the legacy application language and cloud computing is solved by transcoding to Java : the application is not only moved to a new hardware and software platform, its source code is also moved to a new programming paradigm (Java), allowing object-oriented design to be leveraged afterwards.

This shift to Java also unleashes a massive potential for improvement of the transcoded application :

  • many things impossible in Cobol/3270 become dead simple in Java/web : graphics, attractive user interfaces, web services (REST / SOAP), easy interconnection with 3rd party systems, etc. (see the sketch after this list)
  • tons of 3rd party packages (open source or not) can be easily interfaced with the application to complement it or even replace some of its parts.
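
For illustration only, here is a minimal sketch of how a transcoded business function could be exposed as a REST endpoint, using only the HTTP server built into the JDK. The endpoint path, the account data and the BalanceService class are hypothetical examples, not part of our generated code :

    import com.sun.net.httpserver.HttpServer;
    import java.io.OutputStream;
    import java.net.InetSocketAddress;
    import java.nio.charset.StandardCharsets;

    public class BalanceService {
        public static void main(String[] args) throws Exception {
            // Hypothetical sketch: expose a (transcoded) business function over HTTP/JSON.
            HttpServer server = HttpServer.create(new InetSocketAddress(8080), 0);
            server.createContext("/api/balance", exchange -> {
                // In a real project this handler would call the Java class generated
                // from the original Cobol program; here it returns a fixed value.
                String json = "{\"account\":\"12345\",\"balance\":1042.50}";
                byte[] body = json.getBytes(StandardCharsets.UTF_8);
                exchange.getResponseHeaders().add("Content-Type", "application/json");
                exchange.sendResponseHeaders(200, body.length);
                try (OutputStream out = exchange.getResponseBody()) {
                    out.write(body);
                }
            });
            server.start();
        }
    }

Such an endpoint can then be called from a browser or by any 3rd party system, which is exactly the kind of interconnection that 3270 screens cannot offer.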

Doing pure Cobol rehosting doesn’t solve the strategic issues above : the application is still in Cobol afterwards, with all the limitations of an obsolete and dead language conceived at a time when processors could not cope with the flow of instructions needed to deliver the level of abstraction delivered by Java.

That said, Cobol rehosting and Java transcoding are comparable on the cost side : both achieve big savings (above 80%, and sometimes even 90%) when compared to initial mainframe costs.

So, the two approaches are comparable on tactical aspects, but Java is far better than Cobol on the strategic objectives for the longer term.

[back to top of FAQ]
3. Why transcode and not rewrite from scratch ?

Prior to the start of our projects, there is always a decision between 3 main alternatives :

  • buy : the most direct alternative. The customer decides that off-the-shelf business software provided by some independent software vendor matches his needs. So, he migrates his data to this new software (running on non-mainframe hardware) and stops the mainframe.
  • rewrite : the most comfortable alternative. The customer has the financial resources to hire a team of new developers and start the development of new software incorporating his latest business needs and leveraging the most recent technologies.
  • transform : the most sensible approach. The customer decides to go with his existing (usually home-made) application and to improve it incrementally.

“Transform” is usually the most sensible approach from several perspectives:

  1. the home-made application is usually specifically tailored to the exact needs of the organization to maximize its efficiency as well as the value of the service / product delivered to customers. This mission-critical application is clearly a competitive asset that allows it to protect and even gain market share. In the “buy” approach, the customer ends up using the same software as its competition, losing all the advantages of tailored processing serving specific market needs. So, organizations that want to keep distinctive capabilities over the competition tend to stick to their own application to avoid the use of “1-size-fits-all” software
  2. the home-made application usually accumulates the intellectual property of a couple of decades of real-life experience and practice. Consequently, it can handle properly all the situations encountered by the business, even those that are theoretically never supposed to happen ! Rebuilding those capabilities from scratch (“rewrite” approach) usually takes much more time and money than initially expected. Getting a fully-fledged but standard package (“buy” approach) to reach this goal is also unrealistic : the standard package usually needs significant customization to reach the proper level of reliability and functionality when applied to a specific corporation.
  3. the two previous points clearly demonstrate that such a home-made application, composed of (tens of) millions of lines of source code, is a huge asset for the corporation, built through massive investment. Most often, it represents hundreds to thousands of man-years of software development. As such, this investment must be protected : the engraved intellectual property is most often still fully valuable. But it needs to be transported to a new technological platform (operating system, programming language, user interface, etc.) in line with the current IT market in order to extend its life for another couple of decades.
  4. this move to a current technology platform also brings massive savings : mainframe technology is getting old, not to say obsolete. It is therefore much more costly than recent machines (x86 servers), which deliver the same performance and reliability for a tiny fraction of the cost of a mainframe.
  5. this obsolescence is an issue in itself : support from suppliers is getting reduced, developers and sysadmins are getting scarce, etc. It can be dangerous for an organization and its CIO to rely on such a platform for its mission-critical applications.

So, “Transform” makes much sense for our customers : it allows them to keep the competitive weapon represented by software accumulating all their distinctive intellectual property. All this IP is simply transported to the most recent and standard technologies, where it can live and be functionally improved for twenty additional years.

[back to top of FAQ]
4. Why do you put so much emphasis on incremental mainframe migration ?

A mainframe is a very complex and large-scale system : it hosts many applications, representing up to several tens of millions of lines of Cobol, and serves (tens of) thousands of interactive users.

At that level of complexity, any Big Bang approach (= all applications and all people transferred to x86 at once) is doomed to failure. It is impossible to prepare and test all subsystems and applications at once without overlooking some points or failing to foresee some issues. We all know stories about such 1-shot migrations that ended badly !

That is the reason why Eranea fosters a very incremental approach where each user can be migrated from the mainframe to the x86 cloud independently from all others. The same goes for batch jobs : each of them can be migrated independently from all others.

This very granular level of incrementality is made possible by the smart combination of full automation and strict iso-functionality (see other FAQ questions).

The consequence of this incrementality is that the duration of the project can be adapted to customer needs, even taking into account timing constraints external to the transformation to x86. If so desired, the migration can be extended over many months to cope with the targets of the client : start slowly with some pioneer users to gain confidence and acquire operational knowledge of the new system, then ramp up the speed to transfer larger chunks of the user population.

Of course, our “continuous transformation” (see other FAQ question) allows both sides, mainframe and x86, to always be at the exact same functional level : so, the duration of the project doesn’t matter for end users as they all, whether already transferred to x86 or not, work with strictly identical software from a functional point of view.

[back to top of FAQ]
5. How do you achieve this incremental migration ?

The core of our technology (NeaTranscoder + NeaRuntime) relies on 2 core features :

  • full automation
  • strict iso-functionality

By full automation, we mean that we don’t touch “by hand” the Java / web code produced by our technology. Everything is fully generated by our system.

By strict iso-functionality, we mean that the Java code that we produce out of the original legacy language (mostly Cobol) delivers the exact same results, down to the very “last bit” of produced data. The nice consequence is that mainframe users and x86 users can share the same database in real time (“live”), as results produced on either side are indistinguishable. This data sharing is achieved during the migration thanks to a remote JDBC (+ DRDA when the legacy database is IBM DB2) connection from Java on x86 to the legacy mainframe database.
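
For illustration only, a minimal sketch of such a remote connection, assuming IBM’s type-4 JDBC driver (which speaks DRDA natively) is on the classpath; the host name, port, location and table name are placeholders, not values from a real project :

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class MainframeDb2Link {
        public static void main(String[] args) throws Exception {
            // Placeholder URL: type-4 JDBC to DB2 for z/OS over DRDA (port 446 is typical).
            String url = "jdbc:db2://mainframe.example.com:446/DB2LOC";
            try (Connection con = DriverManager.getConnection(url, "user", "password");
                 Statement stmt = con.createStatement();
                 ResultSet rs = stmt.executeQuery("SELECT COUNT(*) FROM ACCOUNTS")) {
                if (rs.next()) {
                    // Java on x86 sees exactly the same live data as the Cobol programs.
                    System.out.println("Rows visible from x86: " + rs.getInt(1));
                }
            }
        }
    }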

By combining the advantages of iso-functionality (see other FAQ question) and of full automation (see other FAQ question), we can extend the duration of the transformation without stopping the business evolution of the applications : the development teams can continue changing the application while the end users are being migrated, because the latest improvements on the legacy application (Cobol) are ported “on the fly” to Java via the next automated transcoding.

The resulting migration is then very smooth and fluid : each user / job is ported to x86 at the most appropriate moment and, at any point in time, being on x86 or mainframe does not matter as functions are strictly identical.

[back to top of FAQ]
6. What are the advantages of full automation ?

Being able to convert millions of lines of source code (Cobol) to Java as well as 3270 screen definitions to web/html in a fully automated manner has several advantages :

  • homogeneous quality : transcoding is done algorithmically by software. So, it produces identical results with constant quality when the operation is repeated on the same or different code assets over time. This homogeneity is a big advantage when compared to manual work by teams of human programmers.
  • costs : doing and redoing this transcoding for a given application costs nothing in human resources. The only resource involved is some cheap x86 computing power.
  • speed : it is a fast process. A standard x86 server can transcode an application at a speed over 1 million lines of source code per minute. As an example, in one of our projects, it took less than 15 minutes to transcode, compile via javac (= the standard Java compiler) and package (as .jar files) a core banking system with over 10 million lines of source code (see the sketch after this list).
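
For illustration only, a minimal sketch of the compile and package steps that such a pipeline automates, written with standard JDK APIs; the file names are hypothetical, and a real build of course handles thousands of generated classes at once :

    import java.io.FileOutputStream;
    import java.nio.file.Files;
    import java.nio.file.Paths;
    import java.util.jar.Attributes;
    import java.util.jar.JarEntry;
    import java.util.jar.JarOutputStream;
    import java.util.jar.Manifest;
    import javax.tools.ToolProvider;

    public class BuildStepsSketch {
        public static void main(String[] args) throws Exception {
            // 1. Compile a (hypothetical) generated source file with the standard javac API.
            int rc = ToolProvider.getSystemJavaCompiler()
                                 .run(null, null, null, "generated/Account.java");
            if (rc != 0) throw new IllegalStateException("javac failed");

            // 2. Package the resulting .class file into a .jar, as the automated
            //    pipeline does for the whole application.
            Manifest manifest = new Manifest();
            manifest.getMainAttributes().put(Attributes.Name.MANIFEST_VERSION, "1.0");
            try (JarOutputStream jar =
                     new JarOutputStream(new FileOutputStream("application.jar"), manifest)) {
                jar.putNextEntry(new JarEntry("Account.class"));
                jar.write(Files.readAllBytes(Paths.get("generated/Account.class")));
                jar.closeEntry();
            }
        }
    }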

Those advantages are what allow application maintenance to continue over the extended migration period : our “continuous transformation” engine (based on Jenkins continuous integration and other DevOps tools) allows the mainframe developers to trigger a new transcoding each time they commit a new Cobol version, so that Cobol and Java at the same functional level are automatically put into production at the same point in time.

Our continuous rollout process then automatically takes the new Java .jar / .ear files and installs them on each application server (Tomcat, JBoss, etc.) of the x86 cluster set up to replace the mainframe.

That is how we deliver on our promise of incremental and smooth migration (see other FAQ question).

[back to top of FAQ]
7. What are the advantages of strict iso-functionality ?

We define strict iso-functionality as the ability to deliver in Java the exact same results as the original results on the mainframe, down to the “very last bit”.

And we really mean it : let’s say that you use Cobol to its numerical limits by computing on numbers with 31 significant digits. In that case, we will also use Java classes allowing the transcoded application to compute on 31 significant digits (well beyond the capacity of Java’s primitive numeric types, and with rounding behaviour that standard use of packages like BigDecimal does not reproduce out of the box) to get the exact same results as the original Cobol results. We use the same computing rules (rounding, etc.) to make sure that any advanced financial computation (compound interest over N years, etc.) will produce the same results up to the 31st digit.
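
For illustration only, a minimal sketch of such a high-precision computation in Java, assuming 31 significant digits and a HALF_UP rounding mode; the real runtime mirrors the exact ROUNDED / truncation rules of the original Cobol program, and the amounts used here are purely hypothetical :

    import java.math.BigDecimal;
    import java.math.MathContext;
    import java.math.RoundingMode;

    public class CompoundInterest31Digits {
        public static void main(String[] args) {
            // Illustrative only: 31 significant digits with an explicit rounding mode.
            // A real transcoder must reproduce the exact rules of the Cobol source.
            MathContext cobolLike = new MathContext(31, RoundingMode.HALF_UP);

            BigDecimal principal = new BigDecimal("1000000.00");
            BigDecimal rate = new BigDecimal("0.0375");   // hypothetical 3.75% yearly rate
            BigDecimal balance = principal;
            for (int year = 0; year < 20; year++) {
                balance = balance.multiply(BigDecimal.ONE.add(rate), cobolLike);
            }
            // The same final scale and rounding must be applied as on the mainframe side.
            System.out.println(balance.setScale(2, RoundingMode.HALF_UP));
        }
    }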

With the same objective, we fully respect the original legacy (Cobol) semantics : memory mapping, hierarchical data structures (Cobol levels), implicit type conversions (string to number and vice versa), specific data representations (Comp-3 numbers), collating sequence, memory pointers (POINTER, ADDRESS OF, etc.), etc.
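
To give one concrete example of the data representations listed above, here is a minimal sketch of how a Comp-3 (packed decimal) field can be decoded into a Java BigDecimal; the byte values are purely illustrative, and a real runtime handles many more cases (sign conventions, scaling, redefines, etc.) :

    import java.math.BigDecimal;
    import java.math.BigInteger;

    public class PackedDecimalSketch {
        // Decode an IBM packed decimal (Comp-3) field: two digits per byte,
        // with the last nibble holding the sign (0xD = negative, 0xC/0xF = positive).
        static BigDecimal decodeComp3(byte[] field, int scale) {
            StringBuilder digits = new StringBuilder();
            for (int i = 0; i < field.length; i++) {
                int high = (field[i] >> 4) & 0x0F;
                int low  = field[i] & 0x0F;
                digits.append(high);
                if (i < field.length - 1) {
                    digits.append(low);
                } else if (low == 0x0D) {
                    digits.insert(0, '-');   // negative sign nibble
                }
            }
            return new BigDecimal(new BigInteger(digits.toString()), scale);
        }

        public static void main(String[] args) {
            // Illustrative bytes: PIC S9(5)V99 COMP-3 value -12345.67 stored as 0x12 0x34 0x56 0x7D
            byte[] raw = { 0x12, 0x34, 0x56, 0x7D };
            System.out.println(decodeComp3(raw, 2));   // prints -12345.67
        }
    }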

The reason why we go to such lengths to respect legacy semantics is important : our migration strategy is incrementality (see other FAQ questions) with extremely small granularity (a single user at a time if needed).

To achieve this incremental approach, we need to be able to share the same database between all users (already migrated to x86 / Java or not) in real time : the functional prerequisite for such live data sharing is clearly to produce the same results for a given computational process, whether it runs in Cobol or in Java.

When this strict iso-functionality is reached, it becomes irrelevant whether the end user is still working on the mainframe or already on Java : database updates are strictly identical. So, the migration becomes “simply” the incremental shift of end users and batch jobs from mainframe to x86 until 100% of the workload has been transferred. Then, the database is migrated to x86. Finally, the mainframe can be switched off.

Iso-functionality also has advantages from the end-user perspective : the interface to the system is strictly identical. No training and no loss of productivity ! The end user stops the 3270 terminal emulator, starts his browser with the given URL and is ready to work again with the same productivity : the screen layout is identical, keystrokes are the same and inter-screen chaining within a global transaction hasn’t changed. So, end users notice nothing and the migration is a non-event for them (in its most positive meaning !).

Iso-functionality also makes the transition invisible over the longer term : for applications producing data reports over many years, the system transformation does not change any values, so business trends over a long period remain coherent, without any bias introduced by the new technology (x86, Java, etc.).

Finally, iso-functionality is key to the project itself : it defines the target of the migration in an extremely precise manner. You either get the exact same result or you do not ! No intermediate fuzzy status… That brings very objective and fluid validation procedures.

Reaching identical results then makes testing easier : you capture reference scenarios on the mainframe, replay them on x86 and validate equality. If results differ, you solve the issue until equality is reached. That is the basis of our automated test system to validate non-regression over the course of the incremental migration.
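
For illustration only, a minimal sketch of that equality check; the file names and scenario layout are hypothetical, while the real NeaTesting engine compares captured 3270 screens and database state :

    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.nio.file.Paths;
    import java.util.List;

    public class ReplayEqualityCheck {
        public static void main(String[] args) throws Exception {
            // Hypothetical files: screens captured on the mainframe (reference)
            // and the screens produced by replaying the same scenario in Java.
            Path reference = Paths.get("scenarios/scenario-042/mainframe-screens.txt");
            Path replayed  = Paths.get("scenarios/scenario-042/java-screens.txt");

            List<String> expected = Files.readAllLines(reference);
            List<String> actual   = Files.readAllLines(replayed);

            // Iso-functionality means strict equality: any difference is a regression.
            for (int i = 0; i < Math.max(expected.size(), actual.size()); i++) {
                String e = i < expected.size() ? expected.get(i) : "<missing>";
                String a = i < actual.size()   ? actual.get(i)   : "<missing>";
                if (!e.equals(a)) {
                    System.out.printf("Mismatch at line %d:%n  mainframe: %s%n  java:      %s%n",
                                      i + 1, e, a);
                    System.exit(1);
                }
            }
            System.out.println("Scenario 042: Java output identical to mainframe reference.");
        }
    }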

[back to top of FAQ]
8. Why do you emphasize so much automated testing ?

We are keen on (automated) testing because it is the only way to guarantee top quality and a flawless migration in our large projects.

Iso-functionality clearly defines the target : obtain the exact same results for reference scenarios captured on the mainframe and replayed in Java.

Java code coverage with tools like Cobertura then allows us to quantify the relevance of the captured tests : we can produce global or detailed (program per program) numbers showing the current coverage reached by a given set of scenarios.

This capability is used by our customers to define their testing strategy : “let’s capture test scenarios up to 90% coverage” is a usual decision. So, they will capture scenarios until the code coverage shows that at least 90% of the lines of source code have been “stimulated” by the tests.

And it is not such a big task : in a recent project, it took 2 people 3 weeks to capture the 3270 test scenarios (representing 35’000 screenshots) for a financial application of approximately 7.2 million lines of Cobol (CICS, DB2, 3270 on z/OS).

(For the geeks, those scenarios are captured via the 3270 HLLAPI for screen displays and keystrokes.)

Those test scenarios (stored as xml files in our system) are then replayed automatically in Java, with the same initial data in the database, to validate equality.

When this status is reached, we then replay them automatically every night to make sure that non-regression still holds even if technological changes were made recently.

It means that the incremental migration starts with tests guaranteeing 90% code coverage. The result is a very smooth and flawless workload transfer.

At the heart of this practice is our automated testing engine NeaTesting depicted on slide XX of our solution presentation.

A side benefit is that those test scenarios can be kept as an asset and used to test Java against Java rather than Java against Cobol : after the end of the migration itself, they are used to validate that technological upgrades (Java releases, Linux versions, etc.) do not break the application. If those scenarios are updated over time, they can even be used as the basis to validate the quality of the application across its new functional releases.

[back to top of FAQ]
9. Why did you put so much energy in the development of NeaControlCenter ?

NeaControlCenter is our web application exposing in a user-friendly manner all the outcomes of our global solution across its various technologies.

For an extensive view of it, slides XX to YY of our presentation show various screenshots.

It gives all stakeholders (developers, sysadmins, ops staff, project leaders, etc.) of such a project a window on the information that matters to them. They can access all the reports that we produce : code forensics, transcoding reports, test replays, rollout status, migration milestones, x86 systems activity, etc.

More than 200 different screens provide detailed or global information and figures on all the areas mentioned above. Access to some functions can be restricted through the definition of ad hoc user roles.

NeaControlCenter gathers in one place information and data produced by Jenkins, Subversion and all the x86 Java application servers (JBoss, Tomcat, etc.) replacing the mainframes.

Some of our customers keep NeaControlCenter after the migration to continue using all its components : they usually appreciate the automated compilation / packaging / deployment / monitoring services that we provide and want to keep using them as they continue the evolution of their transcoded application natively in Java.

NeaControlCenter drills down to the lowest details : you can know the last screen displayed by a user, the program that sent it, the last SQL statements that were processed in a given session, etc. So, it is a key tool to operate the new x86 private cloud smoothly.

But NeaControlCenter also provides a global overview of the distributed system : on a single screen, you can see the current status (cpu, memory, etc.) of the JVMs running the application. It acts as a global console for monitoring the health of the system.

You already have other consoles in place and want to continue using them ? No problem : NeaRuntime is fully instrumented via the standard Java JMX protocol. So, if you prefer to see the numbers or graph them on your standard console (Zabbix, Nagios, etc.), those 3rd party consoles all support JMX and can access the thousands of probes / measurements available in NeaRuntime.
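
For illustration only, a minimal sketch of how a probe can be published over standard JMX; the MBean name and the metric are hypothetical, not actual NeaRuntime probes :

    import java.lang.management.ManagementFactory;
    import javax.management.MBeanServer;
    import javax.management.ObjectName;

    public class JmxProbeSketch {
        // Standard MBean pattern: the interface name is the implementation name + "MBean".
        public interface SessionStatsMBean {
            int getActiveSessions();
        }

        public static class SessionStats implements SessionStatsMBean {
            @Override
            public int getActiveSessions() {
                return 42;   // placeholder value; a real probe would report live data
            }
        }

        public static void main(String[] args) throws Exception {
            MBeanServer server = ManagementFactory.getPlatformMBeanServer();
            // Hypothetical object name; any JMX-capable console (Zabbix, Nagios,
            // JConsole, ...) can then read the attribute remotely.
            server.registerMBean(new SessionStats(),
                                 new ObjectName("com.example.runtime:type=SessionStats"));
            System.out.println("JMX probe registered; keep the JVM alive to browse it.");
            Thread.sleep(Long.MAX_VALUE);
        }
    }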

[back to top of FAQ]
10. How do you adapt to specific situations : “exotic” technologies, unsupported languages / constructs ?

We have developed 100% of our technology internally. So, we own the entire source code and have the competence to adapt it as needed.

So, if your context is specific, or if you use the technologies that we support (Cobol, 3270, CICS, etc.) in a way that we have never encountered before, we can quickly adapt / extend our software to fix the issue and go on with your project.

Additionally, it gives us full flexibility to extend our technology to serve a particular need for a given customer.

[back to top of FAQ]
11. Why did you choose Java ?

Our choice of Java is all-encompassing : it is the target language of our customers’ transcoded applications and it is also the language in which we develop our technology. This unified choice brings evident synergies to our solution : we reuse and leverage our Java competencies in both areas !

The initial choice of Java was a deliberate decision in the design of our solution, for two main reasons :

  • it brings the power of object-oriented programming to the evolution of customer applications and to the design and evolution of our own technology
  • “write once, run anywhere” was the motto of the initial Java language designers : our tools and transcoded applications can run unchanged on various systems. We’ve run projects with Windows, Linux, System/z, AIX, Solaris, etc. as target platforms. This portability is experienced daily by our customers : their developers usually maintain the Java that we produce on Windows, but the application runs in production on Linux.

This makes our technology agnostic to the operating system and underlying hardware. We can respect any customer strategy as long as a Java Virtual Machine is available on the chosen platform.

But nowadays, we strongly emphasize x86 with Linux as the target when the customer doesn’t require any specific platform : it is clearly the most efficient platform from an economic standpoint. Its current price / performance ratio is clearly undisputed.

Additionally, x86 is also the most innovative hardware environment : the Internet “gorillas” (Google, Amazon, Facebook, Twitter, etc.) have built huge datacenters on this processor architecture. That’s where the market currently is ! Consequently, that is where most innovation (performance, availability, energy efficiency, etc.) currently happens.