2015-11-29

What's in a return code?

This last week I have been musing on ‘what’s in a return code and what is it good for’. It started with an innocent question:
‘How do I see which job in a workflow bombed out?’
‘The first job with result equal to zero.’
‘There is no zero result, there are only ones and nulls.’

My job scheduler’s return codes are boolean: 1=success, 0=failure. That is not entirely true; the return code can also be NULL, which normally means ‘not executed yet’. I decided to take a look in the log:

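It looked something like this (simplified, earlier jobs elided):

  job             result
  --------------  ------
  (earlier jobs)  1
  trunc_dsaldo    NULL
  dsaldo_read     NULL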
The first job without a return code is trunc_dsaldo; up until trunc_dsaldo all jobs have executed successfully (result=1). It turned out trunc_dsaldo was successfully bypassed; the boolean return code does not really allow for a third ‘bypassed’ condition. The registration of a bypassed job is bypassed altogether, so it is impossible to tell a bypassed job from a job that has not executed.

I like boolean return codes. Either a job executes successfully or it does not; it could not be more simple, if it were not for the bypass condition. In this particular case it was the next job, dsaldo_read, that failed: due to an infrastructure fuckup the job failed and the connection to the database log table was lost, so it could not register a failure. A very unlikely situation, but nevertheless it happened.

What is a return code good for?
The most obvious purpose: the return code should tell the result of a job. In this case it does not do that well. You can argue the result of a bypassed job is unknown and should be left with a NULL return code, but you can also say it was successfully bypassed and should qualify for a successful return code; then again, a bypassed job can be seen as a failure. Right now I lean towards giving bypassed a unique non-zero return code while keeping the boolean type. This approach keeps the boolean simplicity, but it has a side effect: it indicates the job was successfully bypassed. I still do not know if this is a good thing or not. I have to scrutinise some unnecessarily complex code carefully before I make any changes. If I decide to change the code I will rewrite the job-related ‘return code’ code, since it has been subject to some patching over the years.
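Something like this (the value 2 is just an example, any unique non-zero value would do):

  1    = success
  0    = failure
  2    = bypassed (still ‘true’ when tested as a boolean)
  NULL = not executed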
Another, and maybe the most important, function of a return code is testability: successor jobs must be able to test the outcome of a predecessor. That has already been taken care of; you can set up a job prereq testing the outcome of a predecessor job, for example:
<job name='successor' ...>
  <prereq type='job' predecessor='previousJob' result='success' bypassed='ok'/>
</job>
The successor job will run if the execution of previousJob was a success or previousJob was bypassed.
But the job return code has not got the attention it deserves; that is a nice way of saying there is some odd logic and bad code lurking in my job scheduler concerning return codes. Maybe the return code should be a class. I’m not much of an OO fan, but return codes are important and maybe deserve a class of their own.

2015-11-15

Extracting SAP projects with BAPI - 2

In the previous post I described how we cut the time of extracting project data from SAP. In this post I will show the new improved workflow, coded in Integration Tag Language (ITL), where workflows are expressed as schedules. The first part of the project’s schedule looks like this:


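In outline something like this (a sketch only; the tag and constant names are illustrative, not the production code):

<schedule name='sap_projects' ...>
  <!-- constants and logic used further down in the schedule -->
  <init>
    @COMPANY = '...'   <!-- the company (VBUKR) we extract projects for -->
  </init>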
This is just initialisation of constants and logic needed for the execution of the schedule. The second part:


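Something along these lines (sketched, details simplified):

<job name='extract_PROJ' ...>
  <!-- read the PROJ table from SAP, selecting only our company's projects -->
</job>
<job name='generate_driver' ...>
  <!-- build the PHP array driver0, one entry per project -->
</job>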
Extracts the PROJ table from SAP and creates an iterator with all projects. The last of these jobs, ‘generate_driver’, creates a PHP array called driver0, which is the iterator used to drive the BAPI jobs. The first set of BAPI jobs are the quick ones that run in less than 10 minutes:


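Each of these jobs follows the same pattern, roughly (a sketch only, attribute names illustrative):

<job name='BAPI_PROJECTDEF_GETDETAIL' parallel='yes' ...>
  <mydriver array='array0'/>   <!-- run the BAPI once per project -->
  <rfc function='BAPI_PROJECTDEF_GETDETAIL'>
    PROJECT_DEFINITION = @PROJECT_DEFINITION
  </rfc>
  <sql>
    <!-- explicitly list the tables we want in the Data Warehouse -->
  </sql>
</job>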
These jobs run in parallel; you only insert a parallel='yes' directive on the job statement. The iterator from the generate_driver job is declared by the <mydriver> tag and runs the BAPI once for each project in the array0 iterator. The id of the project is transferred via the @PROJECT_DEFINITION variable, as you can see inside the <rfc> section. In the <sql> section the wanted tables are declared; the default is all tables, but BAPIs often create a lot of ‘bogus’ tables, so we explicitly state which tables we want to import to the Data Warehouse. The next job is a bit more complex: the long-running BAPI_PROJECT_GETINFO job:


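Roughly like this (again a sketch, attribute names illustrative):

<job name='getinfo_container' parallel='yes' ...>   <!-- dummy container job -->
  <job name='truncate_tables' ...>
    <sql><!-- truncate the target tables once, before the workers start --></sql>
  </job>
  <job name='BAPI_PROJECT_GETINFO' ...>
    <forevery array='array0' chunks='9'/>   <!-- split the iterator into 9 parallel chunks -->
    <mydriver piggyback='forevery'/>        <!-- each worker walks the rows of its chunk -->
    <rfc function='BAPI_PROJECT_GETINFO'>
      PROJECT_DEFINITION = @PROJECT_DEFINITION
    </rfc>
  </job>
</job>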
In part one we decided to distribute the workload over 9 workers; by doing so we need to truncate the tables upfront, since we do not want our 9 workers to do 9 table truncations. First we create a dummy job as a container for our jobs, which we declare parallel='yes' so the job runs in parallel with the preceding jobs. Inside the dummy job there is a table truncate job and subsequently the BAPI extraction job. Here the iterator array0 is defined with the <forevery> tag; the iterator is split up in 9 chunks, all of which are executed in parallel. The rows in each chunk are transferred as before by the <mydriver> iterator, which is given a chunk by the piggyback declaration. If you study this job carefully you will see there is some very complex processing going on; if you want a more detailed description I have written a series of posts on parallel execution. I am very happy about the piggyback coupling of the iterators: by joining the iterators a complex workflow is described both succinctly and eloquently.
The fifth and last part of the schedule shows a job similar to the one just described; this time we only need to run the BAPI job in two workers:


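The same pattern, only with the iterator split in two (sketch):

<job name='BAPI_BUS2054_GETDATA' ...>
  <forevery array='array0' chunks='2'/>
  <mydriver piggyback='forevery'/>
  <rfc function='BAPI_BUS2054_GETDATA'>
    PROJECT_DEFINITION = @PROJECT_DEFINITION
  </rfc>
</job>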
If you take the time to study this ITL workflow you will find there is some advanced parallel processing in there, reducing the run time from about one and a half hours to less than 10 minutes. But so far we have used brute force to decrease the run time; by applying some amount of cleverness we can reduce the time even further and make it more stable. I hope to do this another weekend, and if I do I will write a post about that too.

Extracting SAP projects with BAPI - 1

Some years ago I was asked to extract project information from SAP for reporting/BI purposes. I decided to base the extraction solely on BAPIs. I wanted to test the BAPIs, thus avoiding writing ABAP code and/or tracing which SAP tables contain project info. It sounded like a good strategy: no SAP development, just clean use of SAP premade extraction routines. It turned out to be quite a few BAPIs I had to deploy for a complete extraction. First I started with the project list BAPI:
BAPI_PROJECTDEF_GETLIST to get all projects (if you are not familiar with BAPI extraction read this first). Then I just had to run all the other BAPIs one by one for each project:
BAPI_BUS2001_GET_STATUS
BAPI_PROJECTDEF_GETDETAIL
BAPI_BUS2001_GETDATA
BAPI_PROJECT_GETINFO
BAPI_BUS2054_GETDATA
BAPI_NETWORK_GETLIST
BAPI_NETWORK_GETINFO
BAPI_NETWORK_COMP_GETLIST
BAPI_NETWORK_COMP_GETDETAIL
BAPI_BUS2002_GET_STATUS
BAPI_BUS2002_ACT_GETDATA
BAPI_REQUISITION_GETDETAIL
BAPI_PO_GETDETAIL


In the beginning it was fine running these BAPIs in sequence: very few projects, and only one company (VBUKR) using projects. Last time I looked it took about 30 minutes to run the routine; it was a long time, but what the heck, 30 minutes during night time is not a big deal. Last week I had a call from the present maintainers of the Data Warehouse: “Your project schedule takes hours and hours each night. The code is a bit ‘odd’, can you explain how it works, so we can do something about it?”. To understand the ‘archaic’ code in the schedule, the first thing I had to do was clean it up, replacing obsolete idioms with more modern code constructs others could understand. Then I split the original schedule into smaller, more logical schedules, the first one consisting of:
BAPI_PROJECTDEF_GETLIST
BAPI_BUS2001_GET_STATUS
BAPI_PROJECTDEF_GETDETAIL
BAPI_BUS2001_GETDATA
BAPI_PROJECT_GETINFO
BAPI_BUS2054_GETDATA

took more than two hours to run. A look into the projects data showed 16000+ projects, belonging to more companies than I created the extraction for. Now we replaced the BAPI_PROJECTDEF_GETLIST with direct extraction of the SAP PROJ table, selecting only the interesting company, about 8000 projects, and ran the BAPIs in parallel; this brought the execution time down to about 1 hour 20 minutes. Analysing job statistics showed the first three BAPIs only took a little more than 500 seconds each, BAPI_PROJECT_GETINFO 5000 seconds, and finally BAPI_BUS2054_GETDATA about 1000 seconds. Distributing BAPI_PROJECT_GETINFO on 9 workers and BAPI_BUS2054_GETDATA on 2 workers should make all BAPIs execute in between 500 and 600 seconds. This is a balanced scheme and the execution time is acceptable, from over 2 hours to 10 minutes. In the next post I will show the new improved execution schedule.
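The arithmetic behind the balancing:

  BAPI_PROJECT_GETINFO:  5000 s / 9 workers ≈ 550 s per worker
  BAPI_BUS2054_GETDATA:  1000 s / 2 workers = 500 s per worker

With all jobs running in parallel, the schedule is bounded by the slowest worker, roughly 10 minutes.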

2015-11-01

Understand the ISO 8601 date format

How hard can it be? It seems to be incomprehensibly hard for some to understand the ISO 8601 date format YYYY-MM-DD and acknowledge ISO 8601 as the international standard.


One example: at the company we use Microsoft SharePoint collaborative software. As an American, i.e. U.S., company, Microsoft is unaware of ISO standards, e.g. date formats are national in SharePoint. The Swedish date format in SharePoint is ISO 8601, since we Swedes long since adopted the SI/ISO standard. The US date format is not ISO 8601. The US SharePoint admins at the company are unaware of ISO and do not want to promote ISO 8601 as default, as they believe it is the Swedish date format only; instead they use the US date format, since they think the US standard is the world standard. If Microsoft could introduce ISO 8601 as a recognised date format in SharePoint, it would be much easier for me to evangelise the benefits of using one recognised standard format as default for dates in the company. This will probably only happen after the Chinese have established a Chinese hegemony; their date format is sort of big-endian, in accordance with ISO 8601.
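One and the same day in the three formats:

  ISO 8601:  2015-11-01
  US:        11/01/2015
  Chinese:   2015年11月1日 (big-endian, like ISO 8601)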

Good reading for programmers

Once in a while you stumble upon a great post; this one by Sean Hickey is also quite fun: The evolution of a software engineer.