Walk an extra mile in shoes like this one?

Last Friday I went down to Belgium on a business trip to discuss the start of an SAP BW project. When I left I found my old Ecco City Walker shoes, which I hadn’t used for two years. I put them on and went to the airport, where I realised the rubber soles were sticky; they sort of glued to the ground and unfortunately left black marks wherever I went. Arriving at Brussels airport the soles literally started to disintegrate, and now I left not only black marks but sticky bits of rubber everywhere. The SAP BW project will run on a very tight schedule, and we will most likely have to walk an extra mile or two to finish on time. I will not be able to walk at all in shoes like these. After the photo was taken I put the shoes in the garbage can :(


Replicating MySQL with Rsync - Part 2

In some previous posts I have written about replicating Business Intelligence information to Local Area Network satellites.
I have now had my replication scheme up and running for some weeks, and it surpasses all my expectations. The replication runs like clockwork. In the last post I feared the network speed would be too slow, but it seems the network guys have cranked up the speed.
These rsync statistics are from last night’s replication:

Number of files: 294
Number of files transferred: 140
Total file size: 726588950 bytes
Total transferred file size: 641270064 bytes
Literal data: 553799963 bytes
Matched data: 87470101 bytes
File list size: 3898
File list generation time: 0.001 seconds
File list transfer time: 0.000 seconds
Total bytes sent: 70309506
Total bytes received: 781434
sent 70309506 bytes  received 781434 bytes  437482.71 bytes/sec
total size is 726588950  speedup is 10.22
I’m not sure what all the figures mean, but it is fast. The replication is done twice, plus the replication of an empty database (also done twice); normally this whole procedure takes less than three minutes of wall-clock time.
I run this replication against a live MySQL/MyISAM database. I just flush the tables and then rsync. I know there is no ongoing activity against this database during the replication, but that actually doesn’t matter much: you can run a similar scheme against a busy database. If activity is low, run rsync until no data is transferred (or simply run it twice), then lock the database and run a final rsync pass. In that case, though, I would prefer a proper backup scheme. I replicate this way because I know there is no activity against the database, and my rsync procedure is fast and simple.
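The flush-then-rsync procedure above can be sketched as a small shell script. The data directory and target host are assumptions, not my actual setup, and the RUN=echo guard makes it a dry run that only prints the commands it would execute:

```shell
#!/bin/sh
# Sketch of the flush-then-rsync replication described above.
# DATADIR and TARGET are hypothetical placeholders; adjust to your setup.
DATADIR=/var/lib/mysql/dwh            # source MyISAM data directory (assumed)
TARGET=satellite:/var/lib/mysql/dwh   # rsync destination (assumed)
RUN=echo                              # dry run: remove 'echo' to execute for real

replicate() {
  $RUN mysql -e "FLUSH TABLES"                 # write MyISAM buffers out to disk
  $RUN rsync -a --stats "$DATADIR/" "$TARGET"  # first pass moves the bulk
  $RUN rsync -a --stats "$DATADIR/" "$TARGET"  # second pass picks up stragglers
}

replicate
```

For a busy database the same idea applies: repeat the rsync pass until little or no data is transferred, then lock the tables for one final pass.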
But there is still a problem: a colleague runs this procedure manually, and that is certainly not the way it should be. I just haven’t had time to set up an automatic procedure yet. The procedure is a bit complicated, as it involves taking down the target database system and running the replication from the source system, and this should be controlled by a third server that knows when it is time to replicate. Why not run the replication process from the target system? Yes, that is undeniably simpler, but I like to have the process under the supervision of the controlling server that knows the schedule.
This is how I will set it up. From the controlling server I will issue commands via ssh to the source and target systems:

     Source system       Target system

1                        Stop MySQL
2    Flush tables
3    Start rsync
4                        Check the database (MySQL upgrade script)
5                        Start the database
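The five steps above could be sketched as a job on the controlling server, issuing each command over ssh. The host aliases src and tgt, the service commands, and the paths are all assumptions, and the RUN=echo guard turns it into a dry run that just prints what it would do:

```shell
#!/bin/sh
# Sketch of the controlling server's replication job (steps 1-5 above).
# Host aliases (src, tgt) and all paths are hypothetical.
RUN=echo   # dry run: remove 'echo' to execute for real

replicate_all() {
  $RUN ssh tgt "service mysql stop"                                   # 1: stop target MySQL
  $RUN ssh src "mysql -e 'FLUSH TABLES'"                              # 2: flush source tables
  $RUN ssh src "rsync -a /var/lib/mysql/dwh/ tgt:/var/lib/mysql/dwh/" # 3: start rsync
  $RUN ssh tgt "mysql_upgrade"                                        # 4: check the database
  $RUN ssh tgt "service mysql start"                                  # 5: start the database
}

replicate_all
```

One caveat: mysql_upgrade connects to a running server, so in practice step 4 would have to run after step 5, or against a temporarily started instance.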
I have promised my colleague that I will replace his manual labor with a shell script next week :)
If you have a simpler or better way to replicate, please comment on this post.


Web Services - Node.js and Flatiron, a start

Things do not always go according to plan. Yesterday I started to make plans for an Ubuntu private cloud, where I will migrate my Business Intelligence application, The Data Warehouse. While thinking about the hardware I opened my mailbox and found one mail that caught my attention. It was a request for a Data Warehouse Web Service API. The sender, a very experienced Lotus Notes consultant, was obviously dissatisfied with the only two interfaces to the Data Warehouse, ODBC and JDBC: ‘rigid and old fashioned’, he wrote. WTF, my apps should be flexible and futuristic. The consultant was kind enough to specify what he meant by Web service and suggested I study WSDL as a starting point.

I’m not an experienced Web programmer. I did what I usually do when I do not know a subject: I asked my friend Google, and after a while I found myself studying a Node.js tutorial. After some fiddling around with a few simple Node.js scripts I realised I can do a lot with this; actually, web services are something I should have done a long time ago. Why not use PHP, the web language I already know? The simple reason is that I do not know Node.js, and it seems even better suited for creating Web Services than PHP. I decided to use a framework from the beginning, and I chose Flatiron.

Now the problems started: I do not know web development, I do not know JavaScript, I do not know Node.js, and I do not know the Flatiron framework. Embarrassingly, I did not understand much when I started to create my first Flatiron project, and the Flatiron documentation I can find is meager. This morning I admitted to myself that I do not have the necessary skills to develop Flatiron/Node.js apps, so I bought the Kindle version of ‘The Node Beginner Book’ by Manuel Kiessling; if the book is what it says, it will teach me the basics of JavaScript and Node.js.

Yesterday morning I started by planning for an Ubuntu cloud and ended up utterly confused by the, for me, new concepts of Node.js. One thing I do know, though: if I manage to create Web Services, they will be simple, flexible and futuristic Node.js apps floating around in an Ubuntu cloud. When? I do not know; time flies and the backlog at the office just keeps piling up.