2012-03-28

My legs remember the pace, but my heart doesn't.

In early February this year a Belgian colleague visiting Stockholm was surprised by the hordes of people jogging in town the evening he arrived, and asked me if there was some special event going on. No, I said, we are many runners here; me too, I run between 5 and 15 km each morning. This was not entirely true: I had stopped at New Year, each week telling myself I just have too much to do this week, I'll start running next week. Well, today after 3 months I started again. It is surprising how fast and how much you lose in such a short time. I never measure time, distance etc., but I'm sure I was over 6 minutes per kilometer. And that is not running, it's fine Sunday jogging. But I will try to get in shape by running or jogging at least 4 times a week and losing 5 to 6 kilos of excess fat. People tell me I'm not fat, but they have not seen me getting out of bed in the morning; I'm no longer the fit slim guy I once was.
I like to run in the early morning. Summer Stockholm at five a.m. is fantastic: run for an hour in the forest, occasionally see deer (I once ran into an elk), and afterwards take a swim in the cool refreshing water of a lake just 3 kilometers from the office. Wintertime it's not so fantastic: black, sub-zero temperatures, ice and snow, and occasionally being scared to death by a deer. But now spring is here and I feel great about finally being back on track again; only my legs remember the pace, but not my heart.

2012-03-27

Business Intelligence and Management


What is Business Intelligence?

Business intelligence (BI) is both intelligence as in information gathering and intelligence as in intellectual capacity. It is a process for gathering and compiling business data into information. This information is then analyzed to support intelligent, or at least better, decisions. Often the decision is apparent just by looking at the auto-compiled information. The BI information is often visualized as key performance indicators (KPIs), the measurements we choose to monitor and quantify how well we run the business. BI keeps track of short-term measurements such as stock values for day-to-day operational support, as well as longer-term financial measurements for strategic decisions. BI is also used for finding correlations between e.g. advertising, customer loyalty and sales. Sales leads can be found by e.g. searching for customers who have bought spare parts but not service. Since BI systems contain information from many sources, they can often give the complete picture of single entities like a product. With these capabilities it is little wonder that BI systems are recognized as a valuable asset and are becoming more and more a management tool.

Classic Business Management

What gets measured gets managed, or what gets managed gets measured. Measurement is intrinsic to management. Not all aspects of management can be measured: good management also includes culture, mentorship, supervising etc. Business management is the aspect of management based on tangible, measurable facts from production and sales. Traditionally, much effort has been spent on manually gathering information and analyzing business activity to conceive better production and sales strategies: managers quantified measurements, applied their business skills, and subsequently acted on the result.

           Measurement -> Quantify/Think -> Act 
Figure 1 Classic conceptual business management model

The old-fashioned manager

Traditionally a good manager has been a reasonably intelligent person with a very good understanding of the business. They needed experience to cope with changes in the world. The manager had to have all important information at the top of his head; managers of large corporations needed to have exceptionally good memory [1]. But they also had to have less tangible qualities like intuition and good judgment. In essence they had their own business intelligence system in their head. 'Good old' managers were precious since they had to master so many skills.

The new business management

Management by measurement: measure and rally the masses around one KPI at a time. With the introduction of modern BI systems [2], capable of processing gigabytes of information in seconds, business management is changing focus. Managers no longer have to spend time memorizing and analyzing data, since BI systems compile data into simple KPIs. It is becoming more important to rapidly convey alarming KPIs to the workforce, so the entire organization or company can attend to problems instantly. Modern team and production leaders frequently assemble their teams around visualized KPI boards. Top management uses mass meetings to mobilize the company around important KPIs [3]. This is an efficient way of instantly delivering orders to the organization [4]. Knowing the BI systems [5] is necessary for understanding and managing the company. Opinions not based on the common information will not be understood or listened to. Actions not based on the common information are likely to be misunderstood and carried out poorly. Reports will only be acknowledged if they are created from the company BI storage.

          KPI -> Mobilizing -> Act 
Figure 2 New conceptual business management model
The modern manager  
With the introduction of modern BI systems, capable of processing gigabytes of information in seconds, the role of the manager is changing. With the BI in the laptop there is no need to keep the business intelligence inside the head [6]. The modern manager must have proficient knowledge of the BI system, and be a great communicator conveying KPIs to the organization. In the future the manager will be a generalist. Since business knowledge is easily attained from the BI system, the manager can more easily be redeployed from one business area to another. The future manager will have more time for managing the business. [7]

[1]  'Why do people read fiction, when there are balance sheets?' - The late chairman of Atlas Copco, Marcus Wallenberg Jr, was one of many big enterprise leaders with a photographic memory.
[2]  The spintronic revolution in the late 1990s made it practically possible to create hardware infrastructure for OnLine Analytical Processing.
[3]  One interesting aspect of KPIs: who is responsible for creating and interpreting the KPI? This is clearly a management task. KPIs are in a sense the essence of BI and worth a paper of their own.
[4]  This new management style is often confused with indecisiveness and an anxious wish for consensus instead of leadership.
[5]  Managing and knowing how to change the BI system is equally important.
[6]  You can argue whether judgment and intuition can be found in BI systems. But good analytical and visualization tools will at least aid both intuition and judgment.
[7]  Yes, I know the spelling on the pictures is not correct. And I pinched the pictures from the web; 'the modern manager' is a Photoshop collage.

When fast is not enough

Tree Structures

The other day I was reading an interesting article about tree structures, 'Moving Subtrees in Closure Table Hierarchies'. The article describes a design pattern for tree structures which makes it easy to create hierarchical 'tree reports' such as BOM (Bill-Of-Material) lists and their counterparts, where-used lists. Trees are everywhere, not only in BOM structures; they exist in accounting and HR, defining organization and reporting structures, genealogical trees etc. Tree structures are a natural way of visualizing structures of the real world. There are some problems with tree structures: they are complex to manage in most programming languages and, more importantly, they do not fit the relational database model, which is based on the mathematical theory of sets, where there is no order or structure [1]. Tree structures are also a formidable problem for the human programmer [2].

Tree structures and Business Intelligence

Apart from the challenge of creating and maintaining tree structures, the storage model has to be performant. Analyzing via tree structures is very common: 'what components are most critical for next week's production?', 'what happens if we replace part X with Y and Z?' Almost all financial reports are 'sum-thrus' of tree structures. At period closing, efficient tree structures are essential for fast and expedient reporting.

When we introduced tree structures in our business intelligence application 'The Data Warehouse', we put a lot of effort into creating a simple yet effective tree structure design. After some initial testing we decided to try a modified traversed tree structure design pattern. I did a prototype where each node in the tree structure is stored as a separate row, much like the above-mentioned 'closure table hierarchy'. We also store the top node in all rows, which makes it easy and efficient to query the tree structure. We do not store any pointers for 'SQL exploding' the tree; we rely on the physical order in the database table. This is an ugly hack, and it is in breach of every database design guideline I have ever read or written. (The traversed tree structure model contains pointers that enable plain SQL to navigate up and down the tabular tree, but we never implemented them in our prototype.) The upside of our tree structure: it's simple, performant and easy to work with. The downside: it's very hard to update! We do not update our tree; we always recreate the entire tree table.


Prototype Traversed Tree Model – One Example  

Part/Material 0909100260 is a lubrication kit, shown in Figure 1. As you can see, 0909100260 consists of five components or sub-nodes.

Figure 1 - Tree structure for 0909100260

Figure 2 shows the table representation of 0909100260, plus the SQL query used to select the database rows. The relevant columns are:

  1. T_MATNR - the top node
  2. P_MATNR - the parent node
  3. C_MATNR - the child node
  4. TREELVL - the tree structure depth
  5. TREEQTY - the tree structure quantity

SELECT * FROM `DB3_BOM_BASIC_TREE` WHERE `T_MATNR` = '0909100260'

Figure 2 - Table representation of the tree structure for 0909100260

SELECT * FROM `DB3_BOM_BASIC_TREE` WHERE `T_MATNR` = '4150166760'

Figure 3 - Table representation of the tree structure for 4150166760

As you can see in Figure 1, there are actually two structures here. First we have the entire structure with top node 0909100260, and inside it we have a substructure, 4150166760, which consists of two sub-nodes, shown in Figure 3. This table representation of a tree structure is both simple and fast to query. There are basically two problems with the prototype: creating it and updating it.
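Because every row carries the top node, a 'sum-thru' (e.g. the total quantity of each component under one top node) becomes a single SELECT with no recursion at all. Here is a minimal sketch in PHP with PDO, assuming the table above; the connection details are placeholders, and that TREEQTY holds the quantity accumulated down from the top node is my assumption:

<?php
// Sum-thru sketch against the prototype traversed tree table.
// Connection details are placeholders; TREEQTY is assumed to hold
// the quantity accumulated down from the top node.
$db = new PDO('mysql:host=localhost;dbname=dw', 'dwuser', 'secret');

$sql = "SELECT C_MATNR, SUM(TREEQTY) AS TOTAL_QTY
          FROM DB3_BOM_BASIC_TREE
         WHERE T_MATNR = :top
         GROUP BY C_MATNR";

$stmt = $db->prepare($sql);
$stmt->execute(array(':top' => '0909100260'));

foreach ($stmt as $row) {
    echo $row['C_MATNR'], ' ', $row['TOTAL_QTY'], "\n";
}
?>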

Create the prototype traversed tree structure

We receive the tree structure from SAP in the form of the adjacency list model, see Figure 4.

SELECT * FROM `DB3_BOM_BASIC` WHERE `P_MATNR` = '0909100260'
OR `P_MATNR` = '4150166760'

Figure 4 - Tree structure for 0909100260 according to SAP

Looking at Figure 4, two things become apparent. First, for those who know SQL: it is very complex to write an SQL query that shows the tree in Figure 1. Not only is it complex to traverse up and down the tree, it is also inefficient. This tree representation is not good enough for performant business intelligence applications [3]. The other notable thing: there are fewer rows. Our prototype tree structure stores a lot more rows. AC Tools stores 421,131 design BOM rows in SAP; these rows become 1,787,016 when we transform them into the prototype traversed format. Luckily it is just disk space we waste, and we happily sacrifice disk space for performance.

A normal data warehouse ETL process extracts, transforms and loads the SAP tree structure into our prototype traversed notation. Figure 5 shows part of the job that transforms and loads the design BOM; it consists of three jobs:

  1. createTopNodes - list all materials that do not have any parent nodes
  2. explode - recursively find all child nodes of the top nodes
  3. loadit - load the result from explode into the Data Warehouse

At the time we introduced the tree structure in our Data Warehouse it took some 40 minutes [4] to run the explode job; the other jobs are negligible. But what the hey, 40 minutes is not bad; there is some heavy processing going on here and we only load the structures once each night. We decided to use the prototype for a while, as it was very fast and simple for reporting, and we didn't have a problem with the nightly recreation of the entire structure table.

 

Figure 5 - ETL job that transforms & loads the SAP tree structure
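Stripped of all ADAP details, the heart of the explode job is a simple recursion over the adjacency list. Here is a sketch of the idea in PHP; the data layout and function are my own illustration, not ADAP's actual code:

<?php
// Explode sketch: turn an adjacency list (parent, child, quantity)
// into the prototype traversed format, with top node, level and
// accumulated quantity on every row.
function explodeNode(array $children, $top, $parent, $level, $qty, array &$out) {
    if (!isset($children[$parent])) {
        return; // leaf node, nothing below
    }
    foreach ($children[$parent] as $edge) {
        $rowQty = $qty * $edge['QTY']; // accumulate quantity down the tree
        $out[] = array(
            'T_MATNR' => $top,
            'P_MATNR' => $parent,
            'C_MATNR' => $edge['C_MATNR'],
            'TREELVL' => $level,
            'TREEQTY' => $rowQty,
        );
        explodeNode($children, $top, $edge['C_MATNR'], $level + 1, $rowQty, $out);
    }
}

// Tiny sample standing in for DB3_BOM_BASIC rows.
$adjacency = array(
    array('P_MATNR' => 'A', 'C_MATNR' => 'B', 'QTY' => 2),
    array('P_MATNR' => 'B', 'C_MATNR' => 'C', 'QTY' => 3),
);

$children = array();
foreach ($adjacency as $row) {
    $children[$row['P_MATNR']][] = $row;
}

$tree = array();
foreach (array('A') as $top) { // top nodes from createTopNodes
    explodeNode($children, $top, $top, 1, 1.0, $tree);
}
print_r($tree); // rows ready for the loadit job
?>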

Network impact

The SIKLAN project replaced our network switches with new 100 Megabit switches. Before, we had our servers hooked up to Gigabit switches, which are ten times faster [5]. Now we had a network setup that looked like Figure 6.

Figure 6

Our tree structure transform procedure shovels a lot of data between our ETL and database servers, and the impact of the slower switch was notable (from about 40 minutes to 60 minutes). This was a move in the wrong direction, so we rerouted the data traffic through a special Data Warehouse switch, a D-Link Gigabit switch for about 120€. See Figure 7.

Figure 7

After rerouting the data traffic we were back to normal execution times again, which soon dropped to 20 or 30 minutes due to server and software upgrades. Now this got me thinking: 'if these changes affect the execution time that much, why don't we try to do something clever about the explode job (in Figure 5), the job that consumes all the resources?' In the explode job there is an iterator, the <forevery>, that iterates the job for every top node coming from the job createTopNodes. This iterator has the capability to chop itself up in chunks and run them in parallel, following the map and reduce design pattern [6]. And this is exactly what the reworked version of the explode job in Figure 8 does. Now the explode job chops the top nodes into chunks of 1000 entries each. These chunks are run in parallel, but not more than 7 processes at a time. Why not more than 7? Well, I measured this job, and 7 parallel processes seemed to be the optimum [7]. When all chunks are processed, the <exit pgm='reduce…> assembles all transformed structures into one file that is handed over to the last loadit job (in Figure 5). Now we are down to less than five minutes for the entire transformation process! This is more than fast, it is phenomenal.

Figure 8 - Transform even chunks in parallel
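The pattern behind Figure 8 can be sketched in plain PHP with the pcntl extension. Everything below is illustrative (explodeChunk stands in for the real explode job); only the chunk size of 1000 and the cap of 7 processes come from the text above:

<?php
// Map/reduce sketch of the reworked explode job: chunk the top nodes
// and fork at most 7 worker processes at a time.
function explodeChunk(array $chunk, $outFile) {
    file_put_contents($outFile, implode("\n", $chunk)); // placeholder work
}

$topNodes = range(1, 10000);              // stand-in for createTopNodes output
$chunks   = array_chunk($topNodes, 1000); // 1000 top nodes per chunk
$running  = 0;

foreach ($chunks as $i => $chunk) {
    if ($running >= 7) {                  // cap at 7 parallel processes
        pcntl_wait($status);
        $running--;
    }
    $pid = pcntl_fork();
    if ($pid === 0) {                     // child process: the map step
        explodeChunk($chunk, "chunk_$i.out");
        exit(0);
    }
    $running++;
}
while ($running-- > 0) {                  // drain the remaining children
    pcntl_wait($status);
}

// The reduce step: concatenate the chunk files into one for loadit.
$all = '';
foreach (glob('chunk_*.out') as $f) {
    $all .= file_get_contents($f) . "\n";
}
file_put_contents('explode.out', $all);
?>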

But we can do better. Once you start optimizing a process you soon come close to the edge of meaningless sub-optimization, and that is what Figure 9 is about. Here we use knowledge about material number distribution and structures. The chunks in our iterator can vary in size; this pattern of progressively smaller chunks chops another minute off the execution time, down to four minutes! This chunk-size pattern is far from ideal, it's just better than even chunks, and it is the best the iterator can do out of the box. With a little trouble we could create a job that produces a better optimized chunk-size pattern, but that would be going over the top… (at least for now).

There is always a danger in doing this kind of optimization: when conditions change, the assumptions may be invalid. But it's good to know you can do more if needed.

Figure 9 - Transform 'optimized' chunks in parallel

At the moment of writing I think the present bottleneck is the D-Link switch; you just do not get a super-duper switch for 120€. In a simple test I did, it was capable of delivering 600 Mb/s. This is certainly more than the SIKLAN 100 Mb switch, but that high-quality switch delivers its full 100 Mb. Sometimes you only get what you pay for; I'm convinced a pricey Gigabit switch like those we had before would do better than the D-Link. Then we have the database server; we have done no tuning at all on that server. But today it does not matter: we are very fast, and that gives us the luxury of using an awkward tree structure that is very fast and easy to work with for reporting, and that is what matters at the end of the day. And yes, it is still called the prototype traversed tree structure.

 


Real-life example

In this example a SQL SELECT joins three tables:

  1. DB3_BOM_PROD_TREE - 880,269 rows (= nodes in the production BOM)
  2. DB3ARTSTATDAY - 74,947 rows
  3. CB1ARTSTATDAY - 34,570 rows

As you can see, the result table contains 32,116 rows and the query took 0.8272 seconds to execute.

What is the query good for? It is used to help decide at which assembly lines materials should be stored in our Tierp factory.
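The production query itself lives in the figure, but the shape of it is a plain three-table join on material number. Here is a hypothetical sketch; the join columns and selected statistics columns are my guesses only:

<?php
// Hypothetical sketch of the three-table join; the real production
// query is in the figure, and the column names here are guesses.
$db  = new PDO('mysql:host=localhost;dbname=dw', 'dwuser', 'secret');
$sql = "SELECT t.T_MATNR, t.C_MATNR, d.QTY AS DB3_QTY, c.QTY AS CB1_QTY
          FROM DB3_BOM_PROD_TREE t
          JOIN DB3ARTSTATDAY d ON d.MATNR = t.C_MATNR
          JOIN CB1ARTSTATDAY c ON c.MATNR = t.C_MATNR";
foreach ($db->query($sql) as $row) {
    print_r($row);
}
?>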

More examples of my PHP job scheduler can be found here.



[1]  When I first learned about relational databases, my initial reaction was 'show me the tree; without decent tree support this will never fly'. Since then I have learned ways to deal with trees in relational databases (nowadays I consider them to be the best way to store data in computers, by far so far).

[2]  It's common to say 'trivial' about programming problems, but I have never said so about tree structures. My biggest mistake ever in application design was about tree structures. In an MRP application for the CMT Simba workshop (and later for Industrial Technique) I decided to hide all the technical complexity of 'phantom structures' from the user. This led to some extremely complicated BOM update programs. The application lived for some 25 years, and I was always terrified someone would call and say 'Hey, we have a problem with the phantom structures, can you fix it?' (This application also had a year 2K bug which I ignored: 'this application will be scrapped long before year 2000, and anyway I will not be here'. Little did I know; in year 2000 I was back, and among the first things I had to do was fix the bug, the shop calendar for 2001 was off by one day.)

[3]  Then why does SAP store tree structures this way? SAP is an operational system, and online updates of tree structures are frequent. This simple parent-child representation is simpler to update than other tabular tree structures. Simpler, but not simple; it is still very complex to maintain this tree structure.

[4]  The figures are not exact; there are a lot of processes running on our Data Warehouse servers, so the timing for individual jobs varies.

[5]  This is a simplification; I have written about the SIKLAN impact on our Data Warehouse in more detail in 'Why Gigabit matters'.

[6]  Reading Wikipedia, it looks like I have invented a PHP/XML variation. I'm confused about the Google patent; I've used map and reduce for ages. I called it Massive Parallel Processing; it comes very naturally when you do parallel processing 哈哈.

[7]  There is a lot more running in parallel together with these jobs, e.g. we transform the production BOMs in parallel along with the design BOMs.

Extracting data from SAP

Communicating with SAP is often thought of as cumbersome or almost impossible for mere mortals. This picture is false; SAP is very open and very simple to talk to. There are several well documented SAP interfaces that can be used for exchanging data with SAP. Here I'm going to describe how we use the SAP RFC API to communicate with SAP. RFC is a speedy program interface well suited for large data volumes. Normally you develop your own SAP RFC programs in the SAP ABAP language. SAP comes with a large set of prebuilt RFC BAPIs (Business Application Programming Interfaces), but often you can make speedier downloads with tailor-made RFCs. (Recently I have started to write some posts on how to use SQVI and mail to extract data from SAP.)
I have created an ETL engine that is built on top of an in-house developed job scheduler, ADAP (ADvanced Application Processor). Via ADAP's SAP integration adaptor we import data from SAP. ADAP is built in PHP and uses SAPRFC (a free software package for PHP-to-SAP communication).
Update:
Google for SAPRFC by Eduard Koucky; this project looks dead to me, but Axel Bangert seems to have picked up SAPRFC. Piers Harding has created a Unicode-aware interface, SAPNWRFC. You should Google around and pick the right interface for you. (I'm slowly migrating from SAPRFC to SAPNWRFC. The examples in this post use Koucky's original SAPRFC, which is the simpler interface to use.)

Example (other examples can be found here).

In this example I will use SAP BAPIs [1] for extracting information about currencies. OK, here we go. First we need to find appropriate BAPIs. For that we use the SAP BAPI Explorer: log on to SAP and start transaction 'BAPI'. Do an alphabetical search for 'Currency' and open it, see Figure 1. Here we find getList, which gives a list of currencies; a good start.


Figure 1
Here you can find all the details about the Currency getList BAPI. But we only want to find out how to use this BAPI, so we click on the tools tab, then on 'function builder' and finally on 'single test', see Figure 2.

Figure 2
Pressing 'single test' takes us to the 'Test function module: initial screen'. Now click the clock icon; this will execute the Currency getList program and take us to the 'Result Screen', see Figure 3.

Figure 3
Here we can see that we got a table CURRENCY_LIST with 188 rows as a result. We can display the table by clicking on it. This takes us to the 'structure editor', which displays the result table CURRENCY_LIST, see Figure 4.

Figure 4
Now we have run the Currency getList BAPI and we are familiar with the result. We have all the information we need to set up an import job in ADAP. Jobs are organized in ADAP schedules. Here we see a schedule (Figure 5) that contains one job executing the Currency getList from a remote system.

Figure 5
And here is a brief log from running this schedule (Figure 6).

Figure 6
The <autoload> feature of ADAP loads the result into the ADAP data store. Here we can see a display of the first rows of the CURRENCY_LIST table in ADAP (Figure 7).

Figure 7
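As a side note, this is roughly what ADAP's SAP adaptor does under the hood. Here is a minimal stand-alone sketch using Koucky's SAPRFC extension to call the same BAPI; the connection parameters are placeholders and error handling is trimmed:

<?php
// Minimal SAPRFC sketch fetching the currency list outside ADAP.
// Connection parameters are placeholders for your own SAP system.
$rfc = saprfc_open(array(
    'ASHOST' => 'sapserver', 'SYSNR' => '00',
    'CLIENT' => '100', 'USER' => 'rfcuser', 'PASSWD' => 'secret',
));

$fce = saprfc_function_discover($rfc, 'BAPI_CURRENCY_GETLIST');
saprfc_call_and_receive($fce);    // error handling trimmed

$rows = saprfc_table_rows($fce, 'CURRENCY_LIST');
for ($i = 1; $i <= $rows; $i++) { // SAPRFC tables are 1-indexed
    print_r(saprfc_table_read($fce, 'CURRENCY_LIST', $i));
}

saprfc_function_free($fce);
saprfc_close($rfc);
?>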
This example shows how simple it is to extract information from SAP, but this particular information is not very useful, so let's expand the example. We only extracted basic currency information; now we are going to import exchange rate information. Searching the BAPI Explorer we find exchangeRate; opening it we find GetListRateTypes (Figure 8), which we 'single test' as we did with currency.

Figure 8
Based on this we create another job in the ADAP job schedule bapiCurrency (Figure 9).

Figure 9
From the new 'ratetypes' job we get a table RATETYPES_LIST that looks like Figure 10.

Figure 10
The interested reader is challenged to do the single testing in the SAP BAPI Explorer to verify the result table. Looking at the exchange rate types alone is not very useful, so we continue our pursuit of useful information. Looking at the BAPI Explorer menu we see another BAPI, GetCurrentRates, and now it gets exciting (to a certain degree): with exchange rates we can actually do currency exchange arithmetic. So first we do a single test on BAPI_EXCHRATE_GETCURRENTRATES, Figure 11.

Figure 11
This BAPI needs some import parameters. Now we see the true beauty of the 'Test Function Module'; it is a very neat way of trying out programs in SAP. We type a date '20110109' and a rate type 'EURO' from the table RATETYPES_LIST, and give the BAPI a spin by clicking the clock icon.

Figure 12

Figure 12 shows the result: we got a result table EXCH_RATE_LIST with twelve rows. We take this knowledge with us when we set up the ADAP job to extract current rates for the rate types. With a 'job iterator' and 'job references' in ADAP it is easy to connect jobs in series, and we will use these features to extract all current exchange rates for all currency rate types [2]. Here we have the job schedule with the two new jobs, 'generate_iterator' and 'currencyRates', Figure 13. Note how well the <import> maps to the import parameter specification in the SAP BAPI Explorer, and how we feed the job currencyRates with data from the imported table RATETYPES_LIST.

Figure 13

The log (Figure 14) shows there were four successfully executed jobs, so we're happy.
Figure 14

And the table EXCH_RATE_LIST looks like Figure 15.
Figure 15

By exploring the standard SAP BAPIs and creating the XML script in Figure 13, we extract quite some information about currencies from our SAP system. Of course you need to know both SAP and ADAP in some detail to be able to do this; if that were not the case, I (and probably you too) would not have a job [3]. The Figure 13 script is just a bare-bones demo script; in production you probably need to add some prerequisites, like waiting for some batch jobs in SAP to finish. And you would certainly use the extracted information, e.g. create some fancy formatted Excel sheets and mail them to someone, explaining our currency exchange rates. But that is another story. This was to show how easy it is to extract information from SAP. Not only are the communication channels easy to set up; SAP also has some great tools to assist you in finding the information you want to extract. I have shown the BAPI Explorer. Tracing online sessions is another way to find information in SAP systems.
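To round off, the same stand-alone SAPRFC approach works for BAPIs with import parameters. Here is a sketch of the GetCurrentRates call from Figure 11; $rfc is an open connection as in the earlier sketch, and the import parameter names are from memory, so check them in the Function Builder before relying on this:

<?php
// SAPRFC sketch with import parameters, mirroring the single test
// in Figures 11 and 12. Parameter names are assumptions; verify
// them against the function module in SAP.
$fce = saprfc_function_discover($rfc, 'BAPI_EXCHRATE_GETCURRENTRATES');

saprfc_import($fce, 'DATE', '20110109');  // the date typed in the single test
saprfc_import($fce, 'RATE_TYPE', 'EURO'); // rate type from RATETYPES_LIST

saprfc_call_and_receive($fce);            // error handling trimmed

$rows = saprfc_table_rows($fce, 'EXCH_RATE_LIST');
for ($i = 1; $i <= $rows; $i++) {
    print_r(saprfc_table_read($fce, 'EXCH_RATE_LIST', $i));
}
saprfc_function_free($fce);
?>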

[1]  BAPIs are not limited to importing data from SAP; there are also BAPIs for exporting data to SAP.
[2]  This is just to show how easy it is to communicate with SAP, not to explain the details of ADAP.
[3]  It is preposterous to think you can do anything without knowledge.

2012-03-25

Me and Computer Operations



After college ('gymnasium' in the Swedish school system) and military service, I was admitted to KTH (the Royal Institute of Technology in Stockholm), but I declined by mistake (I checked 'no' instead of 'yes', very much me). I got a job post-processing printouts from the computer at Atlas Copco. Ever since, I have been involved in computer operations one way or another.
Computer operations is many things. One important thing is background jobs; actually, at the time I started, background jobs were all there was in the IBM 360 computer at Atlas Copco. The GUI was printouts; computer terminals and 'online computing' came a little later, with ergonomic advice about viewing angle, how to sit, how to prepare yourself before you logged into the system, never work more than two hours in front of the terminal, etc. Job scheduling in IBM 360/390 was done with Job Control Language, JCL, a horrible language that depended on Condition Codes for evaluating the outcome of executions. The logic behind the Condition Code was awkward in a way I never really understood, even though I became proficient in JCL. Unfortunately I lost close contact with IBM mainframes in 1994, but I believe JCL is still around.
Early on we built a job scheduling system on top of JCL to control job execution. Later I implemented a system created by a Californian software company (I have forgotten the name). These Californian guys were considered cool because they wore sneakers with their business suits. I had created a job simulator for this system, and at a German user meeting in Munich, after a wet dinner, I was persuaded to present my simulator the next day. Having terrible stage fright I didn't agree, but said 'sure, but only if I can do it in lederhosen and a Tyrolean hat' and didn't think more of it. The next morning, having a bad hangover, the Californians woke me up with a full Bavarian outfit. I did the presentation, and to my astonishment it was well received. Some months later a schoolmate of mine called me up and said 'I heard about a guy holding a presentation about job scheduling in lederhosen, and I said to myself, it can only be you'. I think I have worked with all major job scheduling systems for IBM mainframes (and some for Unix systems too).
Job scheduling is basically submitting a chain of jobs for controlled execution; good monitoring capabilities and automation are important. In the beginning of the 1980s I helped a German guy, Florian, create a system for engineers, a bridge between the CAD system and the MRP system (which I had built). Florian was a very bright guy, I think he was a linguist or something. He had gone on a motorcycle vacation to Sweden, met a girl in Stockholm and never returned to Germany. He got himself a job at Atlas Copco helping the engineers document new constructions. He found and read the Mims reference manual. (Mims is the best software ever made for development of complex applications, if you ask me.) Anyway, Florian asked me to help him set up a system, and we created one with fully automated background job scheduling, sending mail notifications of the execution results!
Encouraged by the success of Florian's system, I persuaded my boss to make the entire operation automatic, which we actually did (I left Atlas Copco before it was fully implemented). We bought a Siemens robot which we trained to load tape cassettes. There were some initial problems: the robot was muscular, it had no problem pushing two or even three cassettes into the tape drive at once, and it had bad eyesight, randomly picking the wrong tapes. It took a long time before we realized the robot needed more light to read the barcodes on the cassettes. But Atlas Copco made the operations fully automated; no operators supervised the mainframe. Then the operations were outsourced to Ericsson. Later I came back to Atlas Copco and led the transition of the operation of my old systems from Ericsson to Tieto Enator. Now most of this is scrapped and migrated into SAP systems running on IBM computers in Belgium. There is actually one system left which I did some initial work on, 'the Funnel'; I also came up with that name. (The original name was 'tratten', which is 'the funnel' in Swedish; I did the translation.) It's a program-to-program communications system, in operation for more than thirty-five years now. Still, all of Atlas Copco's sales orders are routed through the Funnel.
This was not what I had in mind for this post; the intention was to write about a job scheduling system I have written for our Business Intelligence application, the Data Warehouse, with the title 'The anatomy of a background job'.

2012-03-24

I have lots of other interests besides computers, I just forgot what they are.

Why blog, and what about? I really don't know. But it seems most people do, so why not. Why English? I'm a Swede living in Sweden and my native tongue is Swedish; my English is far from perfect. I have a feeling my writings here will be related to IT, and English is the lingua franca of IT. Actually English is THE lingua franca of the world, so if you want to be read, English it is. I have a colleague, Petr Hutar, who once said 'I have lots of other interests besides computers, I just forgot what they are', and that is pretty much me. My profession is 'IT guy'. These days I mostly work with Business Intelligence and IT architectural stuff. BI is very interesting, so I will probably write about that. I'm sixty, so I've been around, and I will probably write about that too. Why '12dimensions'? BI is much about dimensions, and twelve is on par with, or one more than, the dimensions of the universe. I'm not smart enough to figure out how many dimensions string theory prescribes, is it ten or eleven? Plus one time dimension, which is very important in BI. Anyway, if the universe is held in eleven dimensions, twelve must be enough for me, so I created '12dimensions' for my BI work, and I think it's a catchy name; that's why this blog is called 12dimensions. Why 'larsxjohansson'? It's my gmail account; my name is Lars Johansson (without an 'x'). Lars Johansson is the most common name among middle-aged Swedish men, so I'm happy to have the x variant. This is a test; I do not know if I will keep writing posts here. I work all day and night, so there is little time for other things, but I do some writing in my work that I intend to publish here, so we'll see. This is my way of learning what blogging is about. So welcome me into the blogging world :)