Tuesday, October 25, 2011

Hadoop HDFS and HBase Configuration

We assume that you have theoretical knowledge of Hadoop, HDFS, HBase and ZooKeeper. This document provides the basic configuration for HDFS, HBase and ZooKeeper.

Software Requirements for Hadoop/HBase:
1. Java 1.6.x
2. ssh must be installed and sshd must be running to use the Hadoop scripts that manage remote Hadoop daemons.
3. Hadoop
4. HBase
5. ZooKeeper

Machine Descriptions:
hbasemaster : HBase Master
nameNode : NameNode

RS1: Region Server for HBase
RS2: Region Server for HBase
RS3: Region Server for HBase

zk1: ZooKeeper Quorum
zk2: ZooKeeper Quorum
zk3: ZooKeeper Quorum

dt1: DataNode and TaskTracker
dt2: DataNode and TaskTracker
dt3: DataNode and TaskTracker
dt4: DataNode and TaskTracker

We will set up a JobTracker only when we need to run MapReduce jobs; until then a JobTracker is not required.

Hadoop Configuration
Unzip the Hadoop distribution into /home/hadoop/softwares, i.e. /home/hadoop/softwares/hadoop-0.20.1/

In conf/hadoop-env.sh of hadoop-0.20.1, set JAVA_HOME to /opt/jdk1.6.0_06.
Administrators can configure individual daemons using the HADOOP_*_OPTS configuration options. The available options are shown in the table below.


Daemon Environment Variable
NameNode HADOOP_NAMENODE_OPTS
DataNode HADOOP_DATANODE_OPTS
SecondaryNameNode HADOOP_SECONDARYNAMENODE_OPTS
JobTracker HADOOP_JOBTRACKER_OPTS
TaskTracker HADOOP_TASKTRACKER_OPTS
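
For example, conf/hadoop-env.sh could contain the following (the JAVA_HOME path is the one from this setup; the NameNode heap size is only an illustrative value):

# conf/hadoop-env.sh
export JAVA_HOME=/opt/jdk1.6.0_06
# give the NameNode daemon a larger heap (example value)
export HADOOP_NAMENODE_OPTS="-Xmx1000m $HADOOP_NAMENODE_OPTS"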



Folder structure: (/home/hadoop/hdfs)
For data node: /home/hadoop/hdfs/data
For name node: /home/hadoop/hdfs/name


Note that we should have a common user named "hadoop" under a group named "supergroup".

ssh Configuration in Hadoop cluster

Step 1: Generate a key on the server machine
ssh-keygen -t dsa

Respond to the prompts:
• give an empty passphrase (press return)
• keep the default file path, or give your own

Step 2: Append the public key to the authorized keys
cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys

Step 3: Change the permissions of the following:
• ~/.ssh --> 700
• ~/.ssh/* --> 644

Step 4: Copy the public key from the server to all nodes
ssh-copy-id -i ~/.ssh/id_dsa.pub user@remotehostname(or ip)
Enter the remote user's password when prompted.

Verification:
ssh to the destination host (or IP)
It should not ask for a password.

HDFS Configurations:

Hadoop configuration is driven by two types of important configuration files:
1. Read-only default configuration - src/core/core-default.xml, src/hdfs/hdfs-default.xml and src/mapred/mapred-default.xml.
2. Site-specific configuration - conf/core-site.xml, conf/hdfs-site.xml and conf/mapred-site.xml.

In ~/hadoop-0.20.1/conf, we need to make changes to the core-site.xml and hdfs-site.xml configuration files for Hadoop.

core-site.xml: We need to mention the IP address or domain name of the NameNode.
Parameter Value Notes
fs.default.name hdfs://namenode.XYZ.com:9001/ URI of the NameNode.
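
A minimal conf/core-site.xml for this setup would look like the following (hostname and port taken from the table above):

<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://namenode.XYZ.com:9001/</value>
  </property>
</configuration>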

conf/hdfs-site.xml
Parameter Value
dfs.name.dir /home/hadoop/hdfs/name
dfs.data.dir /home/hadoop/hdfs/data
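
The corresponding conf/hdfs-site.xml, using the folder structure created earlier:

<configuration>
  <property>
    <name>dfs.name.dir</name>
    <value>/home/hadoop/hdfs/name</value>
  </property>
  <property>
    <name>dfs.data.dir</name>
    <value>/home/hadoop/hdfs/data</value>
  </property>
</configuration>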


Slaves

List all slave hostnames or IP addresses in your conf/slaves file, one per line.
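
For the machines described above, conf/slaves would contain (assuming the same XYZ.com domain used elsewhere in this post):

dt1.XYZ.com
dt2.XYZ.com
dt3.XYZ.com
dt4.XYZ.com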

Starting Hadoop

To start a Hadoop cluster you need to start both the HDFS and MapReduce daemons; since we are not running MapReduce yet, starting HDFS is enough.

Format a new distributed filesystem:
$ bin/hadoop namenode -format

Start the HDFS with the following command, run on the designated NameNode:
$ bin/start-dfs.sh

The bin/start-dfs.sh script also consults the ${HADOOP_CONF_DIR}/slaves file on the NameNode and starts the DataNode daemon on all the listed slaves.

Stop HDFS with the following command, run on the designated NameNode:
$ bin/stop-dfs.sh

The bin/stop-dfs.sh script also consults the ${HADOOP_CONF_DIR}/slaves file on the NameNode and stops the DataNode daemon on all the listed slaves.

HBASE Configuration

Step 1 # Download the HBase distribution from an Apache mirror. We are using HBase 0.20.3
Step 2 # Extract the distribution to /home/hadoop/softwares/
Step 3 # Rename the folder to hbase
Step 4 # Give the user hadoop full permissions (777) on the hbase directory
Step 5 # Set JAVA_HOME in the hbase-env.sh file

hbase-site.xml

Parameter Value Description
hbase.rootdir hdfs://hbaseMaster.XYZ.com:9001/hbase
hbase.master hbaseMaster.XYZ.com
hbase.cluster.distributed true The mode the cluster will be in. true: fully distributed with unmanaged ZooKeeper; false: standalone or pseudo-distributed with managed ZooKeeper
hbase.zookeeper.quorum zk1.XYZ.com,zk2.XYZ.com,zk3.XYZ.com Comma-separated list of servers in the ZooKeeper quorum.
This is the list of servers on which we will start/stop ZooKeeper.
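
Put together, conf/hbase-site.xml would look like this (values from the table above):

<configuration>
  <property>
    <name>hbase.rootdir</name>
    <value>hdfs://hbaseMaster.XYZ.com:9001/hbase</value>
  </property>
  <property>
    <name>hbase.master</name>
    <value>hbaseMaster.XYZ.com</value>
  </property>
  <property>
    <name>hbase.cluster.distributed</name>
    <value>true</value>
  </property>
  <property>
    <name>hbase.zookeeper.quorum</name>
    <value>zk1.XYZ.com,zk2.XYZ.com,zk3.XYZ.com</value>
  </property>
</configuration>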


Step 6 # Either put hdfs-site.xml on the HBase classpath or copy hdfs-site.xml from the Hadoop installation's conf directory to the hbase/conf directory

hdfs-site.xml
Parameter Value Description
dfs.data.dir /home/hadoop/hdfs/data
dfs.name.dir /home/hadoop/hdfs/name
dfs.namenode.logging.level all The logging level for the DFS NameNode. Other values are "dir" (trace namespace mutations), "block" (trace block under/over-replications and block creations/deletions), or "all"
dfs.datanode.socket.write.timeout 0
dfs.datanode.max.xcievers 2048
dfs.datanode.handler.count 10
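
As XML, the HBase-related additions from the table above would look like this (values copied from the table; add them inside the <configuration> element):

<property>
  <name>dfs.namenode.logging.level</name>
  <value>all</value>
</property>
<property>
  <name>dfs.datanode.socket.write.timeout</name>
  <value>0</value>
</property>
<property>
  <name>dfs.datanode.max.xcievers</name>
  <value>2048</value>
</property>
<property>
  <name>dfs.datanode.handler.count</name>
  <value>10</value>
</property>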

step 7 # List the region servers in the regionservers file in hbase/conf
RS1.d2hs.com
RS2.d2hs.com
RS3.d2hs.com

step 8 # Give the hadoop user permissions on the hbase directory and the hadoop directory (chmod 755)
step 9 # As the root user, edit ~/.bashrc and append ulimit -c 2048 to the end of the file
step 10 # edit /etc/security/limits.conf to include the following two lines
hadoop soft nofile 32768
hadoop hard nofile 32768

step 11 # start HDFS on the NameNode master
bin/start-dfs.sh

step 12 # start the HBase system from the hbase directory
bin/start-hbase.sh
step 13 # start the HBase shell
bin/hbase shell
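
Once the shell is up, a quick sanity check can be run with the standard shell commands (the table and column family names here are just examples):

status
create 'test', 'cf'
put 'test', 'row1', 'cf:a', 'value1'
scan 'test'
list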


Hadoop commands

To view the contents of the HM directory:

bin/hadoop dfs -ls /user/local/input/HM/
bin/hadoop dfs -cat /user/local/input/HM/files.txt


The right number of reduces seems to be 0.95 or 1.75 multiplied by (number of nodes * mapred.tasktracker.reduce.tasks.maximum); see the Apache MapReduce tutorial.
The right level of parallelism for maps seems to be around 10-100 maps per node.

Setting replication factor for a directory in HDFS

hadoop dfs -setrep -w 3 -R /user/hadoop/dir1


See the Using Distributed Cache in Hadoop post below for a tutorial on Hadoop's Distributed Cache.

Saturday, October 22, 2011

Kartik 2068! One Night in Kalimati

Bittersweet memories of the past are somersaulting in my mind tonight like waves on the sea; perhaps that is why, pen in hand in the dead silence of midnight, these fingers are itching to pour out the restlessness of my heart. It does not feel at all like twenty-six years have passed, and yet in all that time I never once felt this urge to put pen to paper. Why is the mind so eager to write today? Perhaps it is the pain of the past, or perhaps the dreary dread of the future to come, but this heart is desperate to vent its sorrow, even if only in a couple of words on paper.

Time has fast-forwarded before my very eyes. The streak that began last month with my iron breaking down has continued through my iPod and has now reached my laptop. Passing the time turns out to be very hard, yet I pass it by writing whatever comes to mind. For a brief moment, paper and pen have made me forget Word and the computer. The mind turns neither to spirituality nor to materialism; despondent, it drifts toward an unknown destination like a dry leaf tossed in the wind. Only now do I think I understand why the mind and the intellect are compared to gods. Dejected, worthless, idle and unattractive, this softly breathing life keeps searching for its destination behind a hollow show of pride, deluding itself that Lord Narayan will appear before it in a pleased mood and hand out three boons for free. Narayan is, after all, a god with real power: in the Treta age he became Ram and put Rawan in his place, and in the Dwapar age he became Krishna and dealt with Kansa, but in this Kali age the job seems too much even for him, which is why he has currently taken two avatars in Nepal. On one side, the same Ram sits at Sheetal Niwas; on the other, at Singha Durbar, none other than Baburam, with Pashupatinath as backup.

Just as the laptop refused to help me pour the heart's restlessness onto the page, luck now seems set to abandon me too: in an instant the light in my room turned to darkness. Only after a long while did it dawn on me that this was load-shedding, which has become an inseparable part of my daily routine. Just as the light vanished from the room, my senses briefly vanished too. Groping in the dark I went looking for a candle, and when I finally found one I felt like pouring my hatred for the country's government onto paper again (see poisoned ethics), but I held back the rage and returned to my own dream world. The street dogs kept up their relentless barking, and it struck me that even at midnight they stand ready to guard our neighborhood. Such devotion to duty! Perhaps if our country's leaders were dogs, the country would be that much safer. Then my thoughts turned back to my own troubles. The cooking gas ran out two weeks ago; for weeks now I have been eating, with distaste, at a filthy restaurant in Kalimati, and whenever I ask, the shopkeeper only says there is no gas. The buckets in my room sit empty; will the water come tonight so I can fill them? In a corner, empty mineral-water bottles tease me with their smug looks, and dirty clothes are piled up like Kathmandu's garbage. This week, come what may, I am taking all the laundry to my sister's house. Just then the mobile rang. It was a call from home. I picked it up, heard a hello from the other side, and then the voice dissolved into an unintelligible noise; for a while the room echoed with the word hello as I shouted English words at the top of my voice. Then the call dropped. I felt like cursing the government network, but once more I composed myself and kept it bottled up inside. Suddenly the electricity came back. With the light of the CFL bulb my mind, too, seemed to settle onto a single track. Left and right, here, there and everywhere, I felt caught in a whirlpool of problems. Where do I even begin? Untangling them looks terribly hard; better to empty my head for a while and sleep, before the Nepal drinking-water supply calls again in the middle of the night. Let me snatch a moment's peace; let me set down the pen.

Sunday, October 16, 2011

What I have done and what I have not !

I don't take credit for the idea behind this post; I must admit I copied the concept from Sadhana. It's the attraction of the theme that I could not resist.

It is really difficult to admit something which might ruin your image, and I am not quite sure I would be able to do it now, but I promise to do it in the future when I can gather enough guts.

What I have done so far?

1. Flown in a Jet : Yes, 5 times to be exact. [Update] As of Feb 7 2012 it is 9 times.


2. Adventure sports : Did a bungee jump from a 160m cliff. Went to Pokhara for paragliding from Sarangkot hill, but my flight was canceled 15 minutes before the scheduled time due to sinister rain. Skydiving is on my list as well, but the bottom line for skydiving is that I have to go abroad.


3. Been to continents ? Only Asia, I regret. Europe and America are on my list. OK, no bias against continents: I add Africa, Australia and Antarctica to the list too. Fingers crossed for my wish. [Update] In January 2012 I stepped onto Europe in Frankfurt and have been living in North America [USA]. Now it is the turn of Australia and Africa.


4. Been to different countries ? Nepal, India, Japan: just three until now. There are a lot of countries on my cards to visit. I would love to go to Iceland, Sicily in Italy, Paris, the Galapagos, Vienna and the USA. More to add to this list. It is 4 now [2012]: the USA was added to the list on Jan 11 2012.


5. Swimming : Been to a pool but I cannot swim. A depth of 5 feet is the ultimate place I would dare to be.


6. Eaten raw meat : Yes: raw fish, raw squid, raw whale, raw octopus, raw sea urchin, raw crab and finally, now don't blame me, I was forced this time: raw horse meat.


7. Had a crush on a girl : hahaha, yes, no objection from me :) But I have never had a girlfriend until now. Now don't ask for her name. The only hint I can give is that her name starts with A and ends with A.


8. Lived in freezing cold : Yes. In Hokkaido, Japan I lived in a city called Wakkanai, near Russia, the northernmost city of Japan, where the temperature might fall to -30 degrees Celsius in winter.


9. Seen an Ocean : Yes, the Pacific Ocean in Japan [2008] & the Pacific Ocean from the western USA, in California.


10. Traveled on the roof of a bus : Yes, while returning from Phulchoki hill after watching the snow, probably when I was in the 3rd year of my B.E. The bus was packed and we had no option but to travel on top of it. Don't regret it :)


11. Touched Snow : Snow was part of my life when I was studying in northern Japan. 6 months of snowfall. It was enjoyable initially, but later on I was fed up shoveling snow away from my apartment. My door would be jammed by piles of overnight snowfall.


12. Taken alcoholic drinks : Yes :) The first time I tasted beer was in my 3rd year of B.E. The initial taste was horrible; it was just a sip but enough for me to form a negative image of it. I learned to drink beer in Japan. Now I take it occasionally. I don't take hard drinks though; whiskey, gin, vodka and scotch are still taboo for me. Just beer, and that too with friends only.


13. Seen Code monkeys : Yes, a lot of them. I can name them too :D I have also seen the remix version called Gorilla Monkey. My project mates know this :P


What I have not done so far?

1. Never Been to America : It would be a dream come true if I got the chance to go. I want to get an American degree. It's more than just an academic degree; it's a passion, a passion to see the world. To be amongst the best in the world would be a matter of pride and honor to any person. [Update] I am in America now :)


2. Never had a girlfriend : Nowadays it has become a fashion to have at least one girlfriend, and not having one is seen as an inability. I don't have one until now. Fingers crossed :D


3. Never been to a disco : Yet to visit one. The reason I haven't been to a discotheque might be that I can't dance. It's not that I don't like dancing, but I can't do it.


4. Never seen a big mountain up close : It's a shame, really. I was born in Nepal but have yet to witness the glory of our wonderful mountains. I truly wish to go to the Annapurna circuit, Gosaikunda, Manang and Mustang to see the lovely mountains which have been in my dreams for decades now.


5. Never danced : The reason is as simple as that: I cannot dance. I love doing new stuff; let's see if I can do it one day.


6. Never experienced a Roller Coaster : No roller coaster in Nepal. [Update] Am thinking of riding a roller coaster in Vegas. Got my first ride on Revenge of the Mummy [Universal Studios 2012]


7. Never smoked : Maybe twice or thrice, just to know what magic in it makes so many people take it up. It was nothing special.


And the list of haves and have-nots goes on. The have list will certainly grow in the future.

Friday, October 14, 2011

Could only be replicated to 0 nodes, instead of 1 Hadoop 0.20.2

"Could only be replicated to 0 nodes, instead of 1" is a common problem we encounter in Hadoop every now and then.

I faced this problem in my single-node Hadoop cluster. When I searched the internet for a possible answer, I was taken to Hadoop's wiki page HowToSetupYourDevelopmentEnvironment, which suggested erasing all HDFS data and reformatting. The suggestion is far from pragmatic: we cannot reformat the cluster for every potential HDFS problem. Imagine having a production cluster and being told to reformat it.


Steps to Fix :

Issue the df command to view the space available in each Linux mount, and more specifically in the mount Hadoop is on.

[root@bishal hadoop-0.20.2]# df -kh


Filesystem Size Used Avail Use% Mounted on
/dev/hdg3 143G 135G 0M 100% /
/dev/hdg1 99M 11M 83M 12% /boot
none 2.0G 0 2.0G 0% /dev/shm



Upon issuing this command I found that my filesystem had no disk space available. Hadoop's inability to write to HDFS was due to insufficient disk space. I freed some disk space and the problem was solved.
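
You can also confirm the shortage from HDFS's side: the dfsadmin report prints the configured and remaining capacity of each DataNode (command as in Hadoop 0.20.2).

[root@bishal hadoop-0.20.2]# bin/hadoop dfsadmin -report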


Issue the Linux command to empty the Recycle bin

[root@bishal hadoop-0.20.2]# rm -rf ~/.Trash/*

Now recheck the disk space. If emptying the Recycle bin does not free enough space, delete some unnecessary files in your filesystem. Enough disk space should be freed to let Hadoop write files in HDFS; if the block size for Hadoop is 128MB, make sure we have freed at least 128MB of disk space.


Now run your MapReduce job. The problem should be fixed.

Wednesday, October 12, 2011

poisoned ethics and tainted professionalism : Politicians of Nepal

It's the day when it all started. The situation around me was provocative to some people, while for others it was just another opportunity to remain in and around power. The struggle has begun. It is seen as a class struggle between those who were always in power, occupying the resources for a long time, and those who were always lagging behind in all aspects, be it social, political or economic. There is always this difference which causes disturbance. In nature, when the rate of consuming and the rate of producing fall out of proportion, there is disturbance, and someone has to perish in order to restore the order; in this case it might be the producer or the consumer.


Nepal is going through this phase; the suppression and dominance of a certain clan and class has reached saturation level. Moreover, the increase in political awareness amongst the masses and the ever-contracting information gap have fueled revolt. This time it seems the country is in a stage of fragmentation: people have started fighting amongst each other, everyone is representing some class or sector, and they have almost forgotten that they are Nepalis first. It is a natural process which has occurred in almost all parts of the world; it is just recent that Nepal is facing it. Certainly there has to be an end to this destruction. Someone has to win: either the revolutionaries and patriots, or the corrupt who, in the name of democracy and political freedom, have sucked the blood of their fellow countrymen for ages. It is the last chance for the corrupt to win this battle, and they will use whatever power they have to win it. This has become the battle between the poor and suppressed and the suppressors. The UML and Congress are seen as the ones carrying old traditional thoughts, reluctant to change, whereas the Maoists, portrayed by them as destroyers and violators of law, have brought radical political thoughts and ideas. Is it just that the Maoists want to change the economic and political system of Nepal, or do they have other motives as well? Although it is a daunting task for them, it is not an impossible one. But from what we have seen of the Maoists up until now, it is easy for the commoners to see them as no different from the Nepali Congress or CPN UML. This time they have the support of the masses; next time, who knows. Nevertheless, we cannot undermine the class which does want change and wants the state to recognize it after decades of oppression. There are people like Vijay Gachhadar, K.P. Oli, Bidya Bhandari, Madhab Nepal, Bhim Rawal, Sushil Koirala, Ramchandra Paudel, KhumBahadur Khadka, Govinda Raj Joshi, Bharat Mohan Adhikari, J.P. Gupta, Rajendra Mahato, Sarad Singh Bhandari, Mahanta Thakur, Hridayesh Tripathi and most of the politicians who have been involved in crime and corruption of some sort in the past. Can we expect peace and progress with these politicians still intact as rulers? But the fate of the country is such that these parasites keep germinating and do not perish, no matter how fragile the condition of the nation. The country is entangled with these parasites at the moment, and they will leave no stone unturned to suck blood from the people. It is time for people to take power into their own hands and penalize the felons hiding behind the curtain of democracy. But the big question lies here: how? People don't want yet another war. Justice has to be attained; the verdict should be made, but in a peaceful, democratic manner.


Certainly it is not an easy task to eliminate these looters, but one day or another the day of justice will come, and people won't forgive them then. They will have to suffer every bit of the pain they have given to the people. The history of mankind is evidence that bad never wins, no matter how much it tries to hide in the skin of truth and humanity. Madhab Nepal has made history in Nepal as one of the most corrupt and morally degraded figures. He seemed to be worse than the panches and mandales of the king's regime: although the panches and mandales were tyrants and oppressors, they would never compromise their dignity in front of foreigners. But Madhab Nepal seemed to be in a great mood to please whosoever was involved in making him the PM.


When I completed my plus 2, I came to Kathmandu along with many other students from throughout the country, with dreams of becoming doctors and engineers. It was then that, despite giving their hearts out, many students could not become doctors or get scholarships, simply because sitting the exam along with us was an academically weaker candidate who happened to be the daughter of Madhab Kumar Nepal, who pleaded with India to give a scholarship to his daughter, taking away the right of many other brilliant students. He has continued with similar acts of ethical and moral corruption since then, as when his under-qualified bureaucrat, under the limelight of corruption complaints ever since, was sent to Hong Kong. He succeeded in placing his wife in a senior post at Nepal Bank Limited and was engaged in bringing his brother to Singha Durbar during his reign as PM of Nepal. His brother-in-law being an officer in the CIAA is evidence of him being a king of nepotism. When I was in Japan, he was on a trip there as well. Many of the UML cadres left the party after seeing his greed: he had been to a shopping mall where he continually pressured the cadres to buy him whatever caught his eye. Seeing such a greedy person, who was then the chief of the UML, many people left the party. The country had yet to forget the betrayal of the Mahakali Treaty when it had to face the same person again. The people of this country should bear in mind that it was Madhab Nepal who said he was not in favor of a Constituent Assembly when he was PM. That is not a big surprise to us, because we never sent him to write the constitution. People are now capable of telling the real leaders from the masquerades. Voters in two constituencies slapped him in the election, but what a fate for the country: it still had to bear his moral corruption for more than a year. Relieved at last, now that the PM is Baburam Bhattarai. During his tenure, people felt as if they were living in a country with no ethics and no morality, be it the case of his minister Karima Begam slapping a CDO or another minister, Chanda Chaudhary, breaking the glass of a public vehicle. The worst part of these incidents was that Madhab Nepal never showed a sign of shame; he was as bold as ever. Evidently, with such incidents, Nepal was declared the most corrupt country in Asia. Adding insult to injury, he removed Ansari, a state minister, simply because he had won in the CA elections, making the cabinet a mockery of democracy. Democracy is a system where elected people rule a nation, but here the case was different. Madhab Nepal made history again in making the cabinet a nest of losers of the CA election. Never in the history of any country had such a drama been seen, where people were ruled by losers: Bidya Bhandari, Shankar Pokhrel, Madhab Nepal, Sujata Koirala, all losers in the election. But to their good fortune and the country's bad, these people ruled out by the general public came to power once again. Simply disgusting. With "communist" tagged behind him, it was he who removed the pictures of Marx and Lenin when the American ambassador Milinosky was on a visit to Balkhu. His period as PM marks one of the most unethical and corrupt in the history of our country.


Monetary corruption does not hurt as much as moral corruption does. The morally corrupt person is an extremist, similar to an idol: he has no feeling, no love, no affection, and does not feel what is happening in and around him. I won't say that people like Madhab Nepal are the most dangerous thing for this nation, but I will certainly say that poisoned ethics and tainted professionalism such as his are the most dangerous things prevalent among political leaders in Nepal, because the disease they carry has no cure, and it has a great probability of transferring to the people around them. It is the fate of our nation that we keep on producing people like him; our society should bear the responsibility for not being able to teach ethics to our leaders. The society should have been sterilized before it could give birth to him. Like his height and mustache, his heart is narrow too, in fact the narrowest in our country. Cursed by a widow, the country has to suffer its fate, and time and again people like him will be born in this land.

I might seem pretty harsh towards one person when there are so many like him in Nepal. The reason I chose Madhab Kumar Nepal for the topic is simply that he suits it best. Although there are many other politicians like Madhab Nepal masquerading in the name of democracy, my intention here is to let fellow Nepalis know of characters that resemble him. The article represents my personal view, and I might have to bear different tags when readers see it. Please feel free to express your own opinion about politicians in Nepal.

Nepali Unicode Converter: Type in Nepali

Type words in Roman characters below to get their Nepali Unicode equivalent. The application is a simple Nepali Unicode converter which returns Nepali Unicode characters for a given set of Roman characters. I will follow up with detailed instructions on typing properly in Nepali.



Type in Nepali. Convert from Roman to Nepali (press Ctrl+g to toggle between English and Nepali)


Thursday, September 29, 2011

Java class to test whether a file is in copying state or ready state

The following Java utility class can be used to test whether a file is still being copied or has been fully copied to the file system. The class uses the Java Scanner class: while the file is locked by the copy, constructing a Scanner fails, and the thread sleeps for 10 seconds before each retry; when the copy completes, the loop breaks. Note that this relies on the operating system locking the file during the copy (as Windows does); on Linux, opening a partially copied file typically succeeds, so this check may not apply there.



/**
 * 
 */
package com.bishal.Test;

import java.io.File;
import java.io.FileNotFoundException;
import java.util.Scanner;

/**
 * @author bishal acharya
 * This class is used to test whether a file is in copy state
 *         or Ready state.
 */
public class CopyProgress {

    private boolean isFileReady(String filePath) {
        File file = new File(filePath);

        Scanner scanner;
        boolean isCopying = true;
        while (true) {
            try {
                scanner = new Scanner(file);
                // the file opened successfully, so the copy has finished
                scanner.close();
                isCopying = false;
            } catch (FileNotFoundException e) {
                System.out.println("File is in copy State.");
                sleepThread();
            }
            if (!isCopying) {
                break;
            }
        }
        System.out.println("copy completed ::");
        // the file is now readable, so report readiness rather than the copy flag
        return !isCopying;
    }

    /**
     * sleep for 10 seconds
     */
    private static void sleepThread() {
        System.out.println("sleeping for 10 seconds");
        try {
            Thread.sleep(10000);
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
    }

    public static void main(String args[]) {
        CopyProgress cp = new CopyProgress();
        cp.isFileReady("C:\\Documents and Settings\\bacharya\\My Documents\\videos\\Se7en.avi");
    }
}





Check Output : The output of the program would be

File is in copy State.
sleeping for 10 seconds
File is in copy State.
sleeping for 10 seconds
File is in copy State.
sleeping for 10 seconds
File is in copy State.
sleeping for 10 seconds
File is in copy State.
sleeping for 10 seconds
File is in copy State.
sleeping for 10 seconds
File is in copy State.
sleeping for 10 seconds
File is in copy State.
sleeping for 10 seconds
File is in copy State.
sleeping for 10 seconds
copy completed ::

Tuesday, September 27, 2011

Scale Up vs Scale out vs Scale in

Scalability Types




The general understanding of scalability is that if a system does not degrade in performance when the load or data size on the system increases, it is called a scalable system. This definition is quite acceptable. But what if our business requirements change? Will the system still be of the same scale? This question might trigger a need to define scalability in another way.
If an application or system is able to cope well not only when many users turn up or the data size increases, but also when the business requirements change, then it is called a scalable system. This definition of scalability addresses not only the change in load on the system but also changes in business requirements. There might be other dimensions to consider, like administrative and functional measures, to come up with a truly scalable system.



Scale Up systems:


Scale-up systems are also known as vertically scalable systems: systems that improve their performance when resources are scaled vertically. The resources could be memory, disk capacity, bandwidth, etc. These systems traditionally run on top of a single server and can enhance their performance when scaled vertically. Examples are traditional databases, web servers and traditional Windows applications. Many scale-up systems are single-threaded applications, but they can be multi-threaded too. The computation is centralized.


Scale In Systems:


Scale-in systems are systems with the ability to run multiple threads on multi-core machines. For example, 10 instances of MySQL running on top of a 10-core machine form a scale-in system. The computation in these systems is decentralized across the different cores. Most scale-in systems are multi-threaded systems able to run on top of several processors, e.g. multiple instances of a database server running on a multi-core server.


Scale Out systems:



Scale-out systems are also known as horizontally scalable systems. Both scale-up and scale-in systems face several difficulties when the overall dataset the system handles is quite large; processing such a massive dataset can be a daunting task for them. Scaling out means going from a single machine with a few cores to several commodity machines. Each added node contributes to the overall scalability of the system. These systems can be single- or multi-threaded, and the computation is decentralized. Massive parallelism can be leveraged for distributed processing of huge datasets. Google MapReduce has been the technology buzz as far as scale-out systems are concerned. Products like Hadoop, Greenplum, Vertica and Teradata are scale-out systems, all of them supporting the MapReduce style of distributed job processing. Hadoop, for example, can scale out to several thousand commodity machines. The most noticeable thing about these architectures is that all of them are shared-nothing architectures, in which every node is independent of every other node in the cluster and can perform its tasks independently.

Sunday, September 25, 2011

Hike to SarangKot and RSR 2011 August 12

This was my first hike coordination, and I felt a sense of responsibility as well as excitement about the plan. This hike meant more to me than just going somewhere and having fun. I was to make my maiden trip to Sarangkot, Pokhara, something I had kept in my mind for quite a long time, and besides being my first trip there, it was meant to make me fly. We were about 10 people going, and the top item on the plan was to glide in a parachute. All fingers were crossed, but we were still at the mercy of Pokhara's weather. Gliding in the sky was by far the most exciting aspect of our hike: to view the wonderful valley from the sky was something we could only dream of. But it was going to happen this time.

The trip started late, as expected. The culprits were Babinz and Pratik, the last hikers to arrive at the D2 premises. We passed our time playing ping-pong and carrom. Finally they arrived. Thanks to them for making our trip; really, I appreciate them coming. Better late than never. So here we are at the premises of our workplace. This is where we managed to take our first snap. Missed a few guys; maybe Rajani, she took this picture. But never mind, we had officially started our travels with a sense of excitement and expectation of what was about to come.


First snap at D2 Premises





Here we are at Kalimati. Now wait a second: we were meant to go to Sarangkot, so why are we at Kalimati? Doesn't make sense, does it? Well, we stopped at a restaurant in Kalimati, Kathmandu to have our breakfast. Nothing special: sandwiches, omelette and tea, as far as I remember. Because the budget allocated by our office for the hike was limited, we had to make sure we paid our personal expenses if there were additional costs above our estimate.



BreakFast at kalimati






It's quiz time. Babins looks happy. Now don't blame me: my grammar is correct, it's his name that is plural :)


A memory card full of songs was there, but something different was needed to make the trip even more eventful. So I had decided the night before that I would prepare some questions and let the guys play a quiz while we were in the van. Bikesh dai was selected as the quiz master and things went rather well. All of the hikers got their share of shots at giving a correct answer.






Lunch @ Riverside Resort (me, Ashish and Shamesh dai); Shamesh dai looks happy


Our plan took a twist when we suddenly decided to have lunch at Riverside Resort in Kurintar. I had not been to RSR before, so I was at the mercy of the guys who had been there earlier to decide whether to have lunch there or somewhere else.





Babins wants more than just lunch. Maybe swimming :)



For those who haven't been to RSR: guess what this is?





The swimming pool water was awesomely cool

Now this was one of the best parts of our travel: swimming at RSR. The water was extremely cool. Even the guys like me and Ashish who could not swim decided to jump into the pool and join the fun. It was a worthwhile experience. Some of us, including me, even jumped off the jumper at the same spot. That was fun too.






Devi's Fall - Dipesh with his SLR and Bikesh dai smiling as always



Here we are in Pokhara. We arrived on Saturday evening, booked a lodge, and went to WoodRock for some live music. It was fun; Gunjan was the pick of the dancers at WoodRock, and he had a lot of fun, as did the other guys. The next morning, after the hangover, we decided to pay a visit to Devi's Fall. The waterfall, although small in width, had an extremely strong current. We had our breakfast at Devi's Fall and lunch at a local Thakali place, then went to Phewa lake for an hour of boating. It was fun. But above all, we were waiting for the magical moment to arrive: the paragliding was scheduled for 11 o'clock but, unfortunately, was shifted to 2 o'clock, God knows why. We were waiting at the agent's office when suddenly, at 1:45, black clouds appeared out of nowhere. Our plan was completely ruined by the weather. With heavy hearts and drooping faces we decided to return to Kathmandu, but with the spirit that we would come back again.











Time to be sailors at Phewa lake











Begnas Lake





True hikers: tired, exhausted, but on to their destination





All guys happy





Me happy at the pool







Rajani taking a snap







Night at Pokhara Pipal Bot







Night Time party at WoodRock






After two days of excitement, drama and fun, we returned to Kathmandu on Sunday evening, probably at 8 o'clock. On the way back we took some snacks, fried fish and shrimp, at Mugling. Well, Ram dai gave all of us nightmares with his driving from Pokhara to Kathmandu; certainly he is a cooler driver than that. All in all, this was a really good trip if I forget the last-minute cancellation of our paragliding plan.

Tuesday, September 13, 2011

Generating pdf reports in JAVA using iText library

The Java program below demonstrates an iText-based tutorial that produces a PDF report in the form of a certificate. The generated certificate is a PDF file and looks like the figure given. Three Java classes are used for the sample, as listed below. The Certificate class represents the basic certificate object, encapsulating all the content wanted on the certificate PDF report. CertificateReport is the main Java class; when run, it produces the PDF certificate report under the ./templates/ directory. The tutorial also uses the XmlUtils class, an XML utility for reading XML files; it has a method that returns the list of entries in the XML file required by our application.

Classes used in the PDF report generating application:

1. Certificate.java
2. CertificateReport.java
3. XmlUtils.java


Libraries Used:
iText1-3.jar

Certificate.java

package org.dvst.dvs;
import java.util.Date;
import java.util.List;
/**
* DTO for producing certificate
*  @author bishal acharya
*/
public class Certificate {
private int registrationNo;
private String fullName;
private String fatherName;
private String permanentAddres;
private int citizenshipNo;
private int passportNo;
private String creditHours;
private String system;
private Date fromDate;
private Date toDate;
private String course;
private String imagePath;
private List<String> headers;
private String workSystem;

public int getRegistrationNo() {
return registrationNo;
}
public void setRegistrationNo(int registrationNo) {
this.registrationNo = registrationNo;
}
public String getFullName() {
return fullName;
}
public void setFullName(String fullName) {
this.fullName = fullName;
}
public String getFatherName() {
return fatherName;
}
public void setFatherName(String fatherName) {
this.fatherName = fatherName;
}
public String getPermanentAddres() {
return permanentAddres;
}
public void setPermanentAddres(String permanentAddres) {
this.permanentAddres = permanentAddres;
}
public int getPassportNo() {
return passportNo;
}
public void setPassportNo(int passportNo) {
this.passportNo = passportNo;
}
public int getCitizenshipNo() {
return citizenshipNo;
}
public void setCitizenshipNo(int citizenshipNo) {
this.citizenshipNo = citizenshipNo;
}
public String getCreditHours() {
return creditHours;
}
public void setCreditHours(String creditHours) {
this.creditHours = creditHours;
}
public String getSystem() {
return system;
}
public void setSystem(String system) {
this.system = system;
}
public Date getFromDate() {
return fromDate;
}
public void setFromDate(Date fromDate) {
this.fromDate = fromDate;
}
public Date getToDate() {
return toDate;
}
public void setToDate(Date toDate) {
this.toDate = toDate;
}
public String getCourse() {
return course;
}
public void setCourse(String course) {
this.course = course;
}

public String getImagePath() {
return imagePath;
}
public void setImagePath(String imagePath) {
this.imagePath = imagePath;
}
public List<String> getHeaders() {
return headers;
}
public void setHeaders(List<String> headers) {
this.headers = headers;
}
public String getWorkSystem() {
return workSystem;
}
public void setWorkSystem(String workSystem) {
this.workSystem = workSystem;
}
}



CertificateReport.java

package org.dvst.dvs;
import java.io.FileOutputStream;
import java.text.SimpleDateFormat;
import java.util.Date;
import java.util.List;
import com.lowagie.text.Chunk;
import com.lowagie.text.Document;
import com.lowagie.text.Font;
import com.lowagie.text.PageSize;
import com.lowagie.text.Paragraph;
import com.lowagie.text.pdf.PdfWriter;

/**
* @author bishal acharya
*/
public class CertificateReport {
public CertificateReport(Certificate certificate) throws Exception {
SimpleDateFormat formatter = new SimpleDateFormat("EEE, d MMM yyyy");
Document document = new Document(PageSize.A4.rotate());
document.setPageSize(PageSize.A4);
PdfWriter.getInstance(document,
new FileOutputStream("./templates/" + certificate.getFullName()
+ "-" + certificate.getRegistrationNo() + ".pdf"));
document.open();
Paragraph p1 = new Paragraph(30);
p1.add(new Chunk(certificate.getHeaders().get(0), new Font(
Font.TIMES_ROMAN, 8)));
p1.setAlignment(1);
Paragraph p2 = new Paragraph();
p2.add(new Chunk(certificate.getHeaders().get(1), new Font(
Font.TIMES_ROMAN, 9, Font.BOLD)));
p2.setAlignment(1);
Paragraph p3 = new Paragraph();
p3.add(new Chunk(certificate.getHeaders().get(2), new Font(
Font.TIMES_ROMAN, 14, Font.BOLD)));
p3.setAlignment(1);
Paragraph p4 = new Paragraph();
p4.add(new Chunk(certificate.getHeaders().get(3), new Font(
Font.TIMES_ROMAN, 14)));
p4.setAlignment(1);
Paragraph p5 = new Paragraph(60);
p5.add(new Chunk("Reg. No. (DVSDT) :- "
+ certificate.getRegistrationNo(),
new Font(Font.TIMES_ROMAN, 9)));
p5.setAlignment(0);
p5.setIndentationLeft(80);
Paragraph p6 = new Paragraph(45);
p6.add(new Chunk("Certificate", new Font(Font.TIMES_ROMAN, 17,
Font.BOLD)));
p6.setAlignment(1);
Paragraph p7 = new Paragraph(30);
p7.add(new Chunk("This certificate is awarded to " + "  ", new Font(
Font.TIMES_ROMAN, 9)));
p7.add(new Chunk(certificate.getFullName() + "  ", new Font(
Font.TIMES_ROMAN, 9, Font.BOLD)));
p7.add(new Chunk("son of  ", new Font(Font.TIMES_ROMAN, 9)));
p7.add(new Chunk("Mr.  " + certificate.getFatherName(), new Font(
Font.TIMES_ROMAN, 9, Font.BOLD)));
p7.setAlignment(1);
Paragraph p8 = new Paragraph(18);
p8.add(new Chunk("Permanent resident of" + "  ", new Font(
Font.TIMES_ROMAN, 9)));

p8.add(new Chunk(certificate.getPermanentAddres() + "  ", new Font(
Font.TIMES_ROMAN, 9, Font.BOLD)));
p8.add(new Chunk("holding citizenship No : ", new Font(
Font.TIMES_ROMAN, 9)));
p8.add(new Chunk(certificate.getCitizenshipNo() + " ", new Font(
Font.TIMES_ROMAN, 9, Font.BOLD)));
p8.setAlignment(1);
Paragraph p9 = new Paragraph(18);
p9.add(new Chunk("& passport No. ", new Font(Font.TIMES_ROMAN, 9)));
p9.add(new Chunk(certificate.getPassportNo() + " ", new Font(
Font.TIMES_ROMAN, 9, Font.BOLD)));
p9.add(new Chunk("for successful completion of "
+ certificate.getCreditHours()
+ " Credit Hours course on Preliminary", new Font(
Font.TIMES_ROMAN, 9)));
p9.setAlignment(1);
Paragraph p10 = new Paragraph(18);
p10.add(new Chunk(
"education for the workers going to Republic of Korea under",
new Font(Font.TIMES_ROMAN, 9)));
p10.add(new Chunk("  " + certificate.getWorkSystem(), new Font(
Font.TIMES_ROMAN, 9, Font.BOLD)));
p10.setAlignment(1);

Paragraph p11 = new Paragraph(18);
p11.add(new Chunk("from  "
+ formatter.format(certificate.getFromDate()) + "  " + "to  "
+ formatter.format(certificate.getToDate()), new Font(
Font.TIMES_ROMAN, 9)));
p11.setAlignment(1);

Paragraph p12 = new Paragraph(45);
p12.add(new Chunk(
"---------------------"
+ "                                                                "
+ "                                                          "
+ "  ---------------------------", new Font(
Font.TIMES_ROMAN, 8)));

p12.setAlignment(1);
Paragraph p13 = new Paragraph(10);
p13.add(new Chunk(
"   Coordinator"
+ "                                                           "
+ "                                                                        "
+ "Executive Director", new Font(Font.TIMES_ROMAN, 8)));
p13.setAlignment(1);
Paragraph p14 = new Paragraph(20);
p14.setAlignment(1);
p14.add(new Chunk(formatter.format(new Date()), new Font(
Font.TIMES_ROMAN, 7, Font.BOLD)));

document.add(p1);
document.add(p2);
document.add(p3);
document.add(p4);
document.add(p5);
document.add(p6);
document.add(p7);
document.add(p8);
document.add(p9);
document.add(p10);
document.add(p11);
document.add(p12);
document.add(p13);
document.add(p14);
com.lowagie.text.Image image = com.lowagie.text.Image
.getInstance("./templates/me.JPG");
image.setBorder(1);
image.scaleAbsolute(100, 100);
image.setAbsolutePosition(450, 730);
document.add(image);
document.close();

}

public static void main(String[] args) {
List<String> headerList = XmlUtils.getNodeValue("DataElement", "Value",
DvsdtConstants.xmlConfigurationpath);
try {
Certificate c = new Certificate();
c.setFullName("Bishal Acharya");
c.setFatherName("Manoj Acharya");
c.setRegistrationNo(15236);
c.setCitizenshipNo(102545);
c.setPassportNo(3518161);
c.setFromDate(new Date());
c.setToDate(new Date());
c.setCreditHours("Fourty Five (45)");
c.setPermanentAddres("Shahid Marga Biratnagar");
c.setWorkSystem("Employment Permit System");
c.setHeaders(headerList);
CertificateReport cR = new CertificateReport(c);

} catch (Exception e) {
System.out.println(e);
}
}
}


XmlUtils.java

package org.dvst.dvs;

import java.io.File;
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

import javax.xml.parsers.DocumentBuilder;
import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.parsers.ParserConfigurationException;

import org.w3c.dom.Document;
import org.w3c.dom.Element;
import org.w3c.dom.Node;
import org.w3c.dom.NodeList;
import org.xml.sax.SAXException;

/**
* Returns node value in given XML
* @author bishal acharya
*/
public class XmlUtils {
/**
* Gets Node Value List from given XML document with file Path
* 
* @param parentTag
* @param tagName
* @param layoutFile
* @return
*/
public static List<String> getNodeValue(String parentTag, String tagName,
String layoutFile) {
DocumentBuilderFactory docBuilderFactory = DocumentBuilderFactory
.newInstance();
DocumentBuilder docBuilder = null;
Document doc = null;
List<String> valueList = new ArrayList<String>();
try {
docBuilder = docBuilderFactory.newDocumentBuilder();
doc = docBuilder.parse(new File(layoutFile));
} catch (SAXException e) {
e.printStackTrace();
} catch (ParserConfigurationException e) {
e.printStackTrace();
} catch (IOException e) {
System.out.println("Could not load file");
}
NodeList layoutList = doc.getElementsByTagName(parentTag);

for (int s = 0; s < layoutList.getLength(); s++) {
Node firstPersonNode = layoutList.item(s);
if (firstPersonNode.getNodeType() == Node.ELEMENT_NODE) {
Element firstPersonElement = (Element) firstPersonNode;
NodeList firstNameList = (firstPersonElement)
.getElementsByTagName(tagName);
Element firstNameElement = (Element) firstNameList.item(0);
NodeList textFNList = (firstNameElement).getChildNodes();
valueList.add(textFNList.item(0).getNodeValue().trim());
}
}
return valueList;
}

/**
* Unit test for XmlUtils class
* @param args
*/
public static void main(String args[]) {
System.out.println(XmlUtils.getNodeValue("DataElement", "Value",
"./templates/Contents.xml"));
}
}



The XML file should be present at ./templates/Contents.xml
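
A minimal sketch of what Contents.xml could look like (the root element name and the header strings here are only placeholders; CertificateReport reads four DataElement/Value entries for the page headers):

<Contents>
  <DataElement>
    <Value>Header line 1</Value>
  </DataElement>
  <DataElement>
    <Value>Header line 2</Value>
  </DataElement>
  <DataElement>
    <Value>Header line 3</Value>
  </DataElement>
  <DataElement>
    <Value>Header line 4</Value>
  </DataElement>
</Contents>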






The me.JPG image file should be present in the same directory, i.e. ./templates/me.JPG.

Commons Validator : Java based Validation Utility

A utility to validate common data fields like numeric, double, date, email, integer, currency and percent values using the Apache Commons Validator library. The class below uses the commons-validator-1.3.1.jar library. The bulk of the logic is implemented in the library itself; this class is a simple Java program that brings the validators together behind one simple utility.


import java.math.BigDecimal;
import java.util.Date;
import java.util.Locale;

import org.apache.commons.validator.EmailValidator;
import org.apache.commons.validator.routines.BigDecimalValidator;
import org.apache.commons.validator.routines.CurrencyValidator;
import org.apache.commons.validator.routines.DateValidator;
import org.apache.commons.validator.routines.IntegerValidator;
import org.apache.commons.validator.routines.PercentValidator;

/**
* Utility for validation of different fields
*
* @author Bishal Acharya
*/
public class ValidationUtility {
/**
* Check if provided String is Number
*
* @param str
* @return
*/
public static boolean checkIsNumeric(String str) {
if (str == null)
return false;
// the decimal point must be escaped, otherwise it matches any character
return str.matches("-?\\d+(\\.\\d+)?");
}

/**
* Check if given String provided is double
*
* @param str
* @return
*/
public static boolean checkIfDouble(String str) {
if (str == null)
return false;
try {
Double.parseDouble(str);
} catch (NumberFormatException nfe) {
return false;
}
return true;
}

/**
* Validates whether provided string is date field or not
*
* @param date
* @param format
* defaultDate format
* @return
*/
public static boolean validateDate(String date) {
String format = "MM/dd/yyyy";
DateValidator validator = DateValidator.getInstance();

Date dateVal = validator.validate(date, format);
if (dateVal == null) {
return false;
}
return true;
}

/**
* Validates whether provided string is date field or not
*
* @param date
* @param format
* @return boolean status of whether given data is valid or not
*/
public static boolean validateDate(String date, String format) {

DateValidator validator = DateValidator.getInstance();

Date dateVal = validator.validate(date, format);
if (dateVal == null) {
return false;
}
return true;
}

/**
* Formats the given date as according to given formatter
*
* @param date
* @param format
* @return
*/
public static String formatDate(String date, String format) {
DateValidator validator = DateValidator.getInstance();

String dateVal = null;
try {
dateVal = validator.format(date, format);
} catch (IllegalArgumentException e) {
System.out.println("Bad date:" + date + ": cannot be formatted");
}
if (dateVal == null) {
return null;
}
return dateVal;
}

/**
* Validates whether clients data is Integer or not
*
* @param integer
* @return
*/
public static boolean IntegerValidator(String integer) {
IntegerValidator validator = IntegerValidator.getInstance();

Integer integerVal = validator.validate(integer, "#,##0.00");
if (integerVal == null) {
return false;
}
return true;
}

/**
* validates whether data is currency or not
*
* @param currency
* @param loc
* @return
*/
public static boolean currencyValidator(String currency, Locale loc) {
BigDecimalValidator validator = CurrencyValidator.getInstance();
if (loc == null) {
loc = Locale.US;
}
BigDecimal amount = validator.validate(currency, loc);
if (amount == null) {
return false;
}
return true;
}

/**
* Validates whether data provided is in percentage or not
*
* @param percentVal
* @return
*/
public static boolean percentValidator(String percentVal) {
BigDecimalValidator validator = PercentValidator.getInstance();
BigDecimal percent = validator.validate(percentVal, Locale.US);
// a null result means the input could not be parsed as a percentage
if (percent == null) {
return false;
}
// Check the percent is between 0% and 100%
return validator.isInRange(percent, 0, 1);
}

/**
* validates correct email address
*
* @param email
* @return
*/
public static boolean emailValidator(String email) {
EmailValidator validator = EmailValidator.getInstance();
boolean isAddressValid = validator.isValid(email);
return isAddressValid;
}

public static void main(String args[]) {
String s = "12/18/1952";
System.out.println(DateValidator.getInstance().validate(s,
"MM/dd/yyyy"));
System.out.println("valid percent :"
+ ValidationUtility.percentValidator("100"));
System.out.println("Invalid percent :"
+ ValidationUtility.percentValidator("110"));

System.out.println("Valid Currency :"
+ ValidationUtility.currencyValidator("100", Locale.US));
System.out.println("InValid Currency :"
+ ValidationUtility.currencyValidator("Dollar", Locale.US));
System.out.println("Integer Validator :"
+ ValidationUtility.IntegerValidator("1"));
System.out.println("Integer Validator :"
+ ValidationUtility.IntegerValidator("1.2"));
System.out.println("Valid Numeric:"
+ ValidationUtility.checkIsNumeric("1"));
System.out.println("InValid Numeric:"
+ ValidationUtility.checkIsNumeric("ABCD")); }

}




Output :-

Thu Dec 18 00:00:00 NPT 1952
valid percent :true
Invalid percent :false
Valid Currency :true
InValid Currency :false
Integer Validator :true
Integer Validator :false
Valid Numeric:true
InValid Numeric:false

Utility for Evaluating XML string,Files in JAVA

Given below is a Java utility class that can be used to evaluate/parse XML. There are three different methods, which parse either a given XML file or an XML string.

getNodeValueListFromFile

This method returns a map of node-value lists from the XML given as a string.

getNodeValue

This method returns the value of a node from a given XML file.

printNodeValueFromFile

This method prints the node values from the XML given as a string.

import java.io.File;
import java.io.IOException;
import java.io.StringReader;
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

import javax.xml.parsers.DocumentBuilder;
import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.parsers.ParserConfigurationException;

import org.w3c.dom.CharacterData;
import org.w3c.dom.Document;
import org.w3c.dom.Element;
import org.w3c.dom.Node;
import org.w3c.dom.NodeList;
import org.xml.sax.InputSource;
import org.xml.sax.SAXException;

/**
* @author bacharya
*/
public class XmlUtils {
/**
* Gets Node Value from given XML document with file Path
*
* @param parentTag
* @param tagName
* @param layoutFile
* @return
*/
public static String getNodeValue(String parentTag, String tagName,
String layoutFile) {
DocumentBuilderFactory docBuilderFactory = DocumentBuilderFactory
.newInstance();
DocumentBuilder docBuilder = null;
Document doc = null;

try {
docBuilder = docBuilderFactory.newDocumentBuilder();
doc = docBuilder.parse(new File(layoutFile));
} catch (SAXException e) {
e.printStackTrace();
} catch (ParserConfigurationException e) {
e.printStackTrace();
} catch (IOException e) {
System.out.println("Could not load file");
}
NodeList layoutList = doc.getElementsByTagName(parentTag);

for (int s = 0; s < layoutList.getLength(); s++) {
Node firstPersonNode = layoutList.item(s);
if (firstPersonNode.getNodeType() == Node.ELEMENT_NODE) {
Element firstPersonElement = (Element) firstPersonNode;
NodeList firstNameList = (firstPersonElement)
.getElementsByTagName(tagName);
Element firstNameElement = (Element) firstNameList.item(0);
NodeList textFNList = (firstNameElement).getChildNodes();
return textFNList.item(0).getNodeValue().trim();
}
}
return null;
}

/**
* Prints the node values from the XML given as a string
*
* @param parentTag
* @param tagName
* @param xmlRecords
*/
public static void printNodeValueFromFile(String parentTag, String tagName,
String xmlRecords) {
try {
DocumentBuilderFactory dbf = DocumentBuilderFactory.newInstance();
DocumentBuilder db = dbf.newDocumentBuilder();
InputSource is = new InputSource();
is.setCharacterStream(new StringReader(xmlRecords));

Document doc = db.parse(is);
NodeList nodes = doc.getElementsByTagName(parentTag);

for (int i = 0; i < nodes.getLength(); i++) {
Element element = (Element) nodes.item(i);

NodeList name = element.getElementsByTagName(tagName);
Element line = (Element) name.item(0);
System.out.println(": " + getCharacterDataFromElement(line));
}
} catch (Exception e) {
e.printStackTrace();
}

}

/**
* Gets Node Value from given XML as Map of List of Strings
*
* @param parentTag
* @param tagName
* @param xmlRecords
* @return
*/
public static Map<String, List<String>> getNodeValueListFromFile(
String parentTag, String tagName, String xmlRecords) {
Map<String, List<String>> nodeMapList = new HashMap<String, List<String>>();
List<String> valueList = new ArrayList<String>();

try {
DocumentBuilderFactory dbf = DocumentBuilderFactory.newInstance();
DocumentBuilder db = dbf.newDocumentBuilder();
InputSource is = new InputSource();
is.setCharacterStream(new StringReader(xmlRecords));

Document doc = db.parse(is);
NodeList nodes = doc.getElementsByTagName(parentTag);

for (int i = 0; i < nodes.getLength(); i++) {
Element element = (Element) nodes.item(i);

NodeList name = element.getElementsByTagName(tagName);
Element line = (Element) name.item(0);
valueList.add(getCharacterDataFromElement(line));
}
} catch (Exception e) {
e.printStackTrace();
}
nodeMapList.put(tagName, valueList);
return nodeMapList;

}

public static String getCharacterDataFromElement(Element e) {
Node child = e.getFirstChild();
if (child instanceof CharacterData) {
CharacterData cd = (CharacterData) child;
return cd.getData();
}
return "?";
}

/**
 * Unit test for XmlUtils class
 *
 * @param args
 */
public static void main(String args[]) {
    String rec = "<Layout>" + " <Data>" + " <Name>testLayout</Name>"
            + " <Delimiter>s</Delimiter>" + " </Data>" + " <Details>"
            + " <DataElement>" + " <fieldName>GROUP</fieldName>"
            + " <Type>String</Type>" + " <Location>" + " <Num>1</Num>"
            + " </Location>" + " </DataElement>" + " <DataElement>"
            + " <fieldName>ENTITY_CODE</fieldName>"
            + " <Type>String</Type>" + " <Location>" + " <Num>2</Num>"
            + " </Location>" + " </DataElement>" + " </Details>"
            + "</Layout>";

    Map<String, List<String>> nodeMapList = XmlUtils
            .getNodeValueListFromFile("DataElement", "fieldName", rec);
    Map<String, List<String>> nodeMapList1 = XmlUtils
            .getNodeValueListFromFile("DataElement", "Type", rec);
    Map<String, List<String>> nodeMapList2 = XmlUtils
            .getNodeValueListFromFile("Location", "Num", rec);

    List<String> val = nodeMapList.get("fieldName");

    System.out.println(val.size());
    for (int i = 0; i < val.size(); i++) {
        System.out.println(nodeMapList1.get("Type").get(i));
        System.out.println(nodeMapList2.get("Num").get(i));
    }

    System.out.println(XmlUtils.getNodeValueFromFile("DataElement",
            "fieldName", rec));
}
}



Test Output :

2
String
1
String
2
: GROUP
: ENTITY_CODE

Thursday, September 08, 2011

Using Distributed Cache in Hadoop (Hadoop 0.20.2)

Quite often we encounter situations where certain files, such as configuration files, jar libraries, XML files and properties files, need to be present on Hadoop's processing nodes at execution time. Quite understandably, Hadoop has a feature called the Distributed Cache which ships those read-only files to the task nodes. In a Hadoop environment, jobs are basically Map-Reduce jobs, and the necessary read-only files are copied to the TaskTracker nodes at the beginning of job execution. The default size of the distributed cache in Hadoop is about 10 GB, but we can control it by explicitly setting the local.cache.size property in Hadoop's configuration.
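
As an illustrative sketch (the class name CacheSizeExample is ours, and the 10 GB value simply restates the default), the limit can be set programmatically on a job's Configuration; the same key can equally be placed in the site configuration files:

import org.apache.hadoop.conf.Configuration;

public class CacheSizeExample {
    public static Configuration configure() {
        Configuration conf = new Configuration();
        // local.cache.size is expressed in bytes; this makes the
        // roughly 10 GB default limit explicit
        conf.setLong("local.cache.size", 10L * 1024 * 1024 * 1024);
        return conf;
    }
}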


Thus, the Distributed Cache is a mechanism for caching read-only data across the Hadoop cluster. The read-only files are shipped at job-creation time, and the framework makes the cached files available on the cluster nodes when they perform their computation.


The following distributed-cache Java program sends the necessary XML files to the task-executing nodes prior to job execution.


Java Program/Tutorial of Distributed Cache usage in Hadoop
Hadoop Version : 0.20.2
Java Version: Java-SE-1.6



The program below consists of two classes: the DcacheMapper class and its parent class. The job is initialized in DcacheMapper's main method, pointing to the location in HDFS of the file that is to be sent to all nodes. When the setup method of the parent class executes, we can retrieve the distributed configuration file and read it for our own use.

The API documentation for DistributedCache can be found at the following URL.

http://hadoop.apache.org/common/docs/r0.20.2/api/org/apache/hadoop/filecache/DistributedCache.html


Class to create a Map-Reduce job that uses the Distributed Cache


package com.bishal.mapreduce;

import java.io.IOException;
import java.net.URI;
import java.net.URISyntaxException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.filecache.DistributedCache;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

/**
 * @author Bishal Acharya
 */
public class DcacheMapper extends ParentMapper {
    public DcacheMapper() {
        super();
    }

    public void map(Object key, Text value, Context context)
            throws IOException, InterruptedException {
        /**
         * Write your map implementation
         */
    }

    public static void main(String args[]) throws URISyntaxException,
            IOException, InterruptedException, ClassNotFoundException {

        Configuration conf = new Configuration();

        final String NAME_NODE = "hdfs://localhost:9000";

        Job job = new Job(conf);

        // register the layout file with the distributed cache
        // before the job starts
        DistributedCache.addCacheFile(new URI(NAME_NODE
                + "/user/root/input/Configuration/layout.xml"),
                job.getConfiguration());

        job.setMapperClass(DcacheMapper.class);
        job.setJarByClass(DcacheMapper.class);
        job.setNumReduceTasks(0);

        FileInputFormat.addInputPath(job,
                new Path(NAME_NODE + "/user/root/input/test.txt"));
        FileOutputFormat.setOutputPath(job,
                new Path(NAME_NODE + "/user/root/output/importOutput"));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}




Parent Class to read data from Distributed Cache


package com.bishal.mapreduce;

import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.filecache.DistributedCache;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

/**
 * Parent mapper class that performs initialization, setup and cleanup tasks
 * using the Distributed Cache for a Map-Reduce job
 *
 * @author Bishal Acharya
 */
public class ParentMapper extends Mapper<Object, Text, Object, Text> {
    protected Configuration conf;

    public ParentMapper() {
        initialize();
    }

    private void initialize() {
        // configuration used by cleanup's purgeCache call
        conf = new Configuration();
    }

    @Override
    protected void setup(Context context) throws IOException,
            InterruptedException {

        // local paths of the files shipped through the distributed cache
        Path[] uris = DistributedCache.getLocalCacheFiles(context
                .getConfiguration());

        BufferedReader fis;

        /**
         * Prepare objects from the layout XML
         */
        for (int i = 0; i < uris.length; i++) {
            if (uris[i].toString().contains("layout")) {
                String chunk = null;
                fis = new BufferedReader(new FileReader(uris[i].toString()));
                String records = "";
                while ((chunk = fis.readLine()) != null) {
                    records += chunk;
                }
                fis.close();
                // do whatever you like with the xml using a parser
                System.out.println("Records :" + records);
            }
        }
    }

    @Override
    protected void cleanup(Context context) throws IOException,
            InterruptedException {
        DistributedCache.purgeCache(conf);
    }
}


In the parent class we read the cache file in the setup method of the job, and the cache-purge operation is performed in the cleanup phase of the MapReduce job.

Wednesday, September 07, 2011

Evaluating Javascript expressions in Java

Since the Java virtual machine ships with a JavaScript runtime, we can evaluate JavaScript expressions directly from a Java class. This approach can be helpful when we want certain JavaScript expressions to be evaluated in a Java class on the server rather than on the client side. The JVM's built-in script engine is Mozilla Rhino. Rhino is an open-source implementation of JavaScript written in Java, which is used to run expressions given in JavaScript from the JVM.


Given below is the script engine information for the Java virtual machine. This information can be obtained by running the code below.

ScriptEngineManager mgr = new ScriptEngineManager();
List<ScriptEngineFactory> factories = mgr.getEngineFactories();

for (ScriptEngineFactory factory : factories) {
    System.out.println("engName :" + factory.getEngineName()
            + ":engVersion:" + factory.getEngineVersion()
            + ":langName :" + factory.getLanguageName()
            + ":langVersion :" + factory.getLanguageVersion());
}


When the engine names are printed, we see the following results.

Script Engine: Mozilla Rhino (1.6 release 2)
Engine Alias: js
Engine Alias: rhino
Engine Alias: JavaScript
Engine Alias: javascript
Engine Alias: ECMAScript
Engine Alias: ecmascript
Language: ECMAScript (1.6)


Given below is the class that evaluates the JavaScript expression given to it. The evaluate method must be given the expression to be evaluated, the comma-separated setter names, and the setterValueMap, which contains each setter field as a key and its corresponding value.

import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

import javax.script.ScriptEngine;
import javax.script.ScriptEngineManager;
import javax.script.ScriptException;

/**
 * @author Bishal Acharya
 */
public class JavascriptEvaluator {
    /**
     * @param expression
     *            the expression to evaluate
     * @param setters
     *            comma-separated list of setters for the expression
     * @param setterValueMap
     *            Map having each setter key and its value
     * @return Object the evaluated expression
     */
    public static Object evaluate(String expression, String setters,
            Map<String, String> setterValueMap) {
        String[] setterArray = setters.split(",");
        List<String> setterList = new ArrayList<String>();

        for (String val : setterArray) {
            setterList.add(val.trim());
        }

        ScriptEngineManager mgr = new ScriptEngineManager();
        ScriptEngine jsEngine = mgr.getEngineByName("JavaScript");
        Object obj = null;
        // bind each setter as a global variable in the script engine
        for (int i = 0; i < setterList.size(); i++) {
            jsEngine.put(setterList.get(i),
                    setterValueMap.get(setterList.get(i)));
        }
        try {
            // wrap the expression in a function so that "return" is legal
            obj = jsEngine
                    .eval("func1(); function func1(){" + expression + "}");
        } catch (ScriptException e) {
            e.printStackTrace();
        }
        return obj;
    }

    public static void main(String args[]) {
        String expr = "return GROUP_NAME.substr(0,5).concat(' How are You');";
        String setters = "GROUP_NAME";
        Map<String, String> setterValueMap = new HashMap<String, String>();
        setterValueMap.put("GROUP_NAME", "Hello World");
        System.out.println(JavascriptEvaluator.evaluate(expr, setters,
                setterValueMap));
    }
}



Output of running the above class :

Hello How are You

Thursday, August 18, 2011

A JINI based approach towards dynamic cluster management and resource utilization

Abstract
Now that offices have widely adopted computers, we face the challenge of maximizing the utilization of the computing resources offered by each computer while minimizing cost. With many computers come many idle resources: jobs can be distributed to idle servers or even idle desktops. Many of these resources remain idle during off-office hours, and even during office hours many users under-utilize both the computing and the memory resources available to them. We can define policies that allow jobs to go only to computers with free resources, letting the others run normally, and hence maximize throughput as well as minimize cost. Our proposed model not only utilizes resources optimally but also makes the architecture more modular and adaptive, and provides dynamic fail-over recovery and linear scalability.

Keywords : JINI, javaspace, cluster, Space


1. Introduction

As the size of any organization increases, so does the issue of managing the increased resources. A little foresight may save the organization a huge cost. The question arises of how to manage computer clusters so that they carry out computational tasks more efficiently. In this paper we focus on a prototype for cluster management using JINI, and also show how to set up a cluster management system that performs resource sharing. Traditional architectures normally focus on the client-server or peer-to-peer interaction model, but our focus is a different architecture: the space-based architecture. The space-based idea has several advantages compared to its counterparts. A space-based architecture is more robust because one agent failing will not bring down the whole system, as is the case with the client-server model. Replication and mirroring of persistent spaces permit communication regardless of network failure. Communication between peers is anonymous and asynchronous, which lets the computers in the cluster work together to solve a problem collectively. These attributes of the space-based architecture enable us to build an adaptive cluster. We will particularly focus on managing clusters in an adaptive manner, where an increase or decrease in the number of peers won't create any problem for the overall space. Our approach is based on one of the services of JINI, the "javaspace".


2. JINI and JavaSpaces

JINI technology is a service-oriented architecture that defines a programming model which both exploits and extends the ability of Java technology to enable the creation of distributed systems consisting of federations of well-behaved networked services and clients. JINI technology can be used to build adaptive network systems that are scalable, evolvable and flexible, as typically required in dynamic distributed systems [1]. JINI enables computers to find each other and use each other's services on the network without prior information about each other or the protocols used. To make Jini self-healing, leases are utilized. Nearly every registration or resource must be leased; that is, it must periodically be confirmed that the registered resource is alive or that there is still interest in the resource. If the lease is not renewed before its expiration, the resource or registration becomes unavailable. This provides a form of distributed garbage collection, where only healthy resources continue to be published.


2.1 An overview of JINI Infrastructure

Jini comes with several standard infrastructure components out of the box. To achieve the non-functional requirements (NFRs) of performance, resiliency, and scalability, multiple instances of these components run simultaneously on different machines.

The standard Jini services are:
· Lookup Service : The lookup service, named reggie, is the first among equals of Jini services. All services in a Jini architecture register with the lookup service to make themselves available to other services. All initial access to other services is via the lookup service, after which clients bind directly (see the sketch after this list).
· Class Server : The class server, a simple HTTP daemon, eliminates the coupling between clients and service implementations. Clients deal with interfaces; if a particular implementation class is required, it is downloaded transparently.
· Transaction Manager : Distributed transaction support is provided by the transaction manager service, called mahalo.
· JavaSpace : Services with a requirement to share information with other services do so through a JavaSpace. The reference implementation of the JavaSpace is named outrigger.
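
As a rough sketch of how a client reaches a JavaSpace through the lookup service (SpaceLocator is our own hypothetical class, and the fixed ten-second wait is a simplification; a real client would also install a security manager and configure a codebase):

import net.jini.core.lookup.ServiceRegistrar;
import net.jini.core.lookup.ServiceTemplate;
import net.jini.discovery.DiscoveryEvent;
import net.jini.discovery.DiscoveryListener;
import net.jini.discovery.LookupDiscovery;
import net.jini.space.JavaSpace;

public class SpaceLocator {
    public static void main(String[] args) throws Exception {
        // multicast discovery of lookup services (reggie) in all groups
        LookupDiscovery discovery = new LookupDiscovery(
                LookupDiscovery.ALL_GROUPS);
        discovery.addDiscoveryListener(new DiscoveryListener() {
            public void discovered(DiscoveryEvent ev) {
                for (ServiceRegistrar registrar : ev.getRegistrars()) {
                    try {
                        // match any service implementing the JavaSpace interface
                        ServiceTemplate tmpl = new ServiceTemplate(null,
                                new Class[] { JavaSpace.class }, null);
                        JavaSpace space = (JavaSpace) registrar.lookup(tmpl);
                        if (space != null) {
                            System.out.println("Found JavaSpace: " + space);
                        }
                    } catch (Exception e) {
                        e.printStackTrace();
                    }
                }
            }

            public void discarded(DiscoveryEvent ev) {
            }
        });
        Thread.sleep(10 * 1000L); // crude wait for discovery callbacks
    }
}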

2.2 JavaSpace technology

The JavaSpaces technology is a high-level tool for building distributed applications, and it can also be used as a coordination tool. A marked departure from classic distributed models that rely on message passing or RMI, the JavaSpaces model views a distributed application as a collection of processes that cooperate through the flow of objects into and out of one or more spaces. This programming model has its roots in Linda, a coordination language developed by Dr. David Gelernter at Yale University. However, no knowledge of Linda is required to understand and use JavaSpaces technology [2]. The dominant model of computation in distributed computing is the client-server model, which is based on the assumption that local procedure calls are the same as remote procedure calls. JavaSpaces overcome the problems of synchronization, latency and partial failure inherent in distributed systems by providing loosely coupled interactions between the components of distributed systems. JavaSpace processes communicate through a space, not directly. Communication between processes on different physical machines is asynchronous and free from the main limitation of the traditional client/server model, where communication requires the simultaneous presence on the network of both parties, client and server. Sender and receiver in JavaSpace do not need to be synchronized and can interact whenever the network is available. In a distributed application, JavaSpaces technology acts as a virtual space between providers and requesters of network resources or objects. This allows participants in a distributed solution to exchange tasks, requests, and information in the form of Java technology based objects [3]. The JavaSpace transaction management and notify features make it easier to build a dynamic cluster management framework. In particular, they address the dynamic cluster problem, where nodes can depart from and join the cluster at any time.


2.3 Leasing

One of the important features of Jini is the concept of leasing. All resources in a Jini system are leased, including proxy records in the lookup service, transactions, and, of course, memory in a JavaSpace. When a lease expires, the resource is recovered and made available to other components. This prevents resource accretion, a common problem in distributed systems. Leases can be renewed explicitly, or implicitly through a lease renewal manager. In the interests of simplicity, the example below uses leases that last "forever"; this is obviously inappropriate for a production system [4].
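
For contrast with those forever leases, here is a minimal sketch (LeaseDemo is our own name, and the five-minute duration is arbitrary) of writing an entry under a finite lease and keeping it alive with the starter kit's LeaseRenewalManager:

import net.jini.core.entry.Entry;
import net.jini.core.lease.Lease;
import net.jini.lease.LeaseRenewalManager;
import net.jini.space.JavaSpace;

public class LeaseDemo {
    // write an entry with a finite lease instead of Lease.FOREVER,
    // and hand the lease to a renewal manager that keeps it alive
    public static void writeWithRenewal(JavaSpace space, Entry entry)
            throws Exception {
        Lease lease = space.write(entry, null, 5 * 60 * 1000L); // five minutes
        new LeaseRenewalManager().renewUntil(lease, Lease.FOREVER, null);
    }
}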

2.4 The Entry Interface

All objects that can be stored in a JavaSpace must implement the Entry interface. Entry is a simple tag interface that does not add any methods but does extend Serializable. Like JavaBeans, all Entry implementations must provide a public constructor that takes no arguments. Unlike JavaBeans, all of the data members of an Entry must be public. In production Jini systems, the public data members are a non-issue because of common techniques like the envelope-letter idiom: the Entry implementation acts as an envelope or wrapper around the "real" payload, which may be any serializable Java object. The only public data members exposed are those required for the template-based matching of the JavaSpaces API [5].
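
As a minimal sketch of such an envelope (TaskEntry is a hypothetical class of ours, reused by the later snippets; it is not part of the Jini distribution):

import java.io.Serializable;

import net.jini.core.entry.Entry;

// envelope-letter idiom: public matchable fields wrap an arbitrary payload
public class TaskEntry implements Entry {
    public String taskId;        // exposed for template-based matching
    public String status;        // e.g. "pending" or "done"
    public Serializable payload; // the "real" letter inside the envelope

    public TaskEntry() { // required public no-argument constructor
    }

    public TaskEntry(String taskId, String status, Serializable payload) {
        this.taskId = taskId;
        this.status = status;
        this.payload = payload;
    }
}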


3. JavaSpace based cluster management

At the focus of our system is a working space and entries that cluster nodes can write to the space. These entries are a Join entry and a Depart entry. The space itself is assumed to run on a host that forms the nucleus of the cluster, and in fact, for the work reported here, we assume this host and its space stay up. We are working on the problem of making this space system properly persistent and robust against temporary failure. It is further assumed that the space-hosting node is well known to the other nodes that participate in the cluster. This seems a reasonable assumption for a cluster within an administrative boundary.

3.1 Architecture Description

A JavaSpaces service holds entries, each of which is a typed group of objects expressed in a class that implements the interface net.jini.core.entry.Entry. Once an entry is written into a JavaSpaces service, it can be used in future lookup operations. Looking up entries is performed using templates, which are entry objects that have some or all of their fields set to specified values that must be matched exactly; all remaining fields, which are not used in the lookup, are left as wildcards.
There are two lookup operations: read() and take(). The read() method returns either an entry that matches the template or an indication that no match was found. The take() method operates like read(), but if a match is found, the entry is removed from the space. Distributed events can be used by requesting a JavaSpaces service to notify you when an entry that matches the specified template is written into the space. Note that each entry in the space can be taken at most once, but two or more entries may have the exact same values. Using JavaSpaces technology, distributed applications are modeled as a flow of objects between participants, which is different from classic distributed models such as RMI. Figure 1 indicates what a JavaSpaces technology based application looks like. A client can interact with as many JavaSpaces services as needed. Clients perform operations that map entries to templates onto JavaSpaces services. Such operations can be singleton or contained in a transaction so that all or none of the operations take place. Notifications go to event catchers, which can be either clients or proxies for clients.


4. JavaSpace related concepts

4.1 Transactions

The JavaSpaces API uses the package net.jini.core.transaction to provide basic atomic transactions that group multiple operations across multiple JavaSpaces services into a bundle that acts as a single atomic operation. Either all modifications within the transaction will be applied or none will, regardless of whether the transaction spans one or more operations or one or more JavaSpaces services. Note that transactions can span multiple spaces and participants in general. A read(), write(), or take() operation that has a null transaction acts as if it were in a committed transaction that contained only that operation. As an example, a take() with a null transaction parameter performs as if a transaction was created, the take() was performed under that transaction, and then the transaction was committed.
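
A hedged sketch of such a transaction (TransactionalMove and the lease durations are our own; the TransactionManager proxy would be obtained by looking up the mahalo service, and TaskEntry is the hypothetical entry from section 2.4):

import net.jini.core.lease.Lease;
import net.jini.core.transaction.Transaction;
import net.jini.core.transaction.TransactionFactory;
import net.jini.core.transaction.server.TransactionManager;
import net.jini.space.JavaSpace;

public class TransactionalMove {
    // move an entry between two spaces atomically: both operations
    // commit together or neither takes effect
    public static void move(JavaSpace from, JavaSpace to, TaskEntry template,
            TransactionManager mahalo) throws Exception {
        Transaction.Created created = TransactionFactory.create(mahalo,
                60 * 1000L); // the transaction itself is leased
        Transaction txn = created.transaction;
        try {
            TaskEntry taken = (TaskEntry) from.take(template, txn, 10 * 1000L);
            if (taken != null) {
                to.write(taken, txn, Lease.FOREVER);
            }
            txn.commit();
        } catch (Exception e) {
            txn.abort(); // roll back: the taken entry reappears in "from"
            throw e;
        }
    }
}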

4.2 The Jini Outrigger JavaSpaces Service

The Jini Technology Starter Kit comes with the package com.sun.jini.outrigger, which provides an implementation of a JavaSpaces technology-enabled service. You can run it in two ways:
· As a transient space that loses its state between executions: use com.sun.jini.outrigger.TransientOutriggerImpl.
· As a persistent space that maintains state between executions: use com.sun.jini.outrigger.PersistentOutriggerImpl.
The TransientOutriggerImpl can be run only as a non-activatable server, but the PersistentOutriggerImpl can be run as either an activatable or a non-activatable server.

4.3 Distributed Data Structures in JavaSpace

With JavaSpace it is also possible to organize objects in the form of a tree structure or an array. Since remote processes may access these structures concurrently, they are called distributed data structures. A channel in JavaSpaces terminology is a distributed data structure that organizes messages in a queue. Several processes can write messages to the end of the channel, and several processes can read or take messages from the beginning of it. A channel is made up of two pointer objects, the head and the tail, which contain the numbers of the first and the last entry in the channel (Figure 2). It is possible to use several such channels, giving all actors associated with a space the possibility to handle messages in a FIFO-fair manner. Channels may also be bounded, meaning that an upper limit can be set on how many messages a channel may contain.
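
A sketch of what such a pointer entry might look like (ChannelPointer is our own hypothetical class, following the Entry conventions of section 2.4):

import net.jini.core.entry.Entry;

// one "head" and one "tail" pointer exist per channel; messages are
// separate entries carrying their own position numbers
public class ChannelPointer implements Entry {
    public String channelName; // which channel this pointer belongs to
    public String kind;        // "head" or "tail"
    public Integer position;   // index of the first (head) or last (tail) message

    public ChannelPointer() { // required public no-argument constructor
    }
}

To append a message, a process would take() the tail pointer under a transaction, increment its position, write the message entry with the new index, and write the tail pointer back; reading from the front works symmetrically on the head pointer.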

4.4 Master-Worker pattern

The Master-Worker pattern (sometimes called the Master-Slave pattern) is used for parallel processing and is the basic pattern for working with a JavaSpace. It follows a simple approach that allows applications to perform simultaneous processing across multiple machines or processes via a master and multiple workers. The master hands units of work out to the "space", and these are read, processed and written back to the space by the workers. In a typical environment there are several "spaces", several masters and many workers; the workers are usually designed to be generic, i.e. they can take any unit of work from the space and process the task.
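
A minimal worker loop under these assumptions (Worker is our own name, and TaskEntry is again the hypothetical entry from section 2.4; a master would simply write pending TaskEntry instances into the space):

import net.jini.core.lease.Lease;
import net.jini.space.JavaSpace;

// a generic worker: take any pending task, "process" it, write the result back
public class Worker implements Runnable {
    private final JavaSpace space;

    public Worker(JavaSpace space) {
        this.space = space;
    }

    public void run() {
        TaskEntry template = new TaskEntry(); // null fields match any task...
        template.status = "pending";          // ...but only pending ones
        try {
            while (!Thread.currentThread().isInterrupted()) {
                // blocks until a matching task appears; take() removes it
                TaskEntry task = (TaskEntry) space.take(template, null,
                        Long.MAX_VALUE);
                task.status = "done"; // a real worker would compute on task.payload
                space.write(task, null, Lease.FOREVER);
            }
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}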



5. Discussions and Conclusions

In this paper we have discussed using JINI/JavaSpace technology to provide clustering support. The approach presented is useful in settings that require clusters to perform resource-intensive work, such as data processing or computation. Our model can be realized using JINI/JavaSpace technologies, which are open source and hence cost-effective compared to proprietary solutions. For government offices in particular, this approach can prove beneficial, as it provides effective solutions to clustering issues like scalability, fault tolerance, adaptability and utilization of resources. Creating adaptive systems in dynamic environments, where services and clients come and go all the time and system components may dynamically be added and removed, is a complex task. JavaSpaces has several features that ease this task, including its ability to provide asynchronous communication uncoupled in time, space and destination, based on associative addressing. Since a JavaSpace stores objects, it is a simple means of distributing both messages and agent behavior. Our space-based architecture utilizes these possibilities, together with the actor role abstraction, to simplify the creation of adaptive systems. The architecture consists of three main types of agents that interact asynchronously through the space.



6. References

[1] http://www.jini.org/wiki/Main_Page
[2] http://java.sun.com/developer/technicalArticles/tools/JavaSpaces/
[3] http://www.javafaq.nu/javaarticle150.html
[4] http://www.softwarematters.org/jiniintro.html
[5] http://www.artima.com/intv/swayP.html
[6] http://www.javaworld.com/javaworld/jw102000/jw1002jiniology.html?page=1
[7] http://java.sun.com/developer/technicalArticles/tools/JavaSpaces/
[8] http://www.theserverside.com/tt/articles/article.tss?l=UsingJavaSpaces
[9] http://www.artima.com/lejava/articles/dynamic_clustering.html
[10] http://www.artima.com/intv/cluster2.html
[11] Grid Computing: A Practical Guide to Technology and Applications, by Ahmar Abbas. Publisher: Charles River Media.
[12] http://jan.newmarch.name/java/jini/tutorial/Jini.html
[13] Grid Computing: Software Environment and Tools, edited by Omer F. Rana and Jose C. Cunha.
[14] http://java.sun.com/developer/Books/JavaSpaces/
[15] Dynamic Cluster Configuration and Management using JavaSpaces, by K.A. Hawick and H.A. James, Computer Science Division, School of Informatics.