Lab 1: Virtual Technology: Kernel-based Virtual Machine (KVM)




Install Oracle Java 7 (Debian)

Reference JavaPackage

Install the java-package tool and download the source archive

  1. Download one of the following from Oracle:
    1. The J2SE Development Kit (JDK), version 6 (update >= 10) or 7
    2. The J2SE Runtime Environment (JRE), version 6 (update >= 10) or 7
    3. The J2SE API Javadoc, version 6 (update >= 10) or 7
sudo aptitude install java-package

(Note that java-package is not in the main sources; you need to add the contrib component to your /etc/apt/sources.list.)
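The contrib component can be added by editing the file by hand, or with a one-line sed. A minimal sketch, run against a scratch copy of sources.list (the mirror URL and release name are illustrative); on the real system you would edit /etc/apt/sources.list as root and then refresh the package lists:

```shell
#!/bin/sh
# Sketch: add the "contrib" component to Debian "main" source lines.
# Runs on a scratch copy; the mirror URL and suite below are illustrative.
set -e
TMP=$(mktemp)
echo "deb http://deb.debian.org/debian wheezy main" > "$TMP"
# Append " contrib" to every line ending in " main" that lacks it already.
sed -i '/contrib/!s/ main$/ main contrib/' "$TMP"
cat "$TMP"   # prints: deb http://deb.debian.org/debian wheezy main contrib
rm -f "$TMP"
```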


make-jpkg jdk-7u*-linux-x64.tar.gz
Detected Debian build architecture: amd64
Detected Debian GNU type: x86_64-linux-gnu
Detected product:
  Java(TM) Development Kit (JDK)
  Standard Edition, Version 1.7.0+update40
  Oracle(TM), Inc.
Is this correct [Y/n]: Y
Checking free diskspace: done.

In the next step, the binary file will be extracted. Probably a license
agreement will be displayed. Please read this agreement carefully. If you
do not agree to the displayed license terms, the package will not be built.

Press [Return] to continue:
sudo dpkg -i oracle-j2sdk1.7_1.7.0+update*_amd64.deb


sudo update-alternatives --auto java
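After switching the alternatives, it is worth confirming which binary `java` now resolves to. A minimal sketch, assuming a Debian-style /etc/alternatives symlink chain (the echo messages are my own, not tool output):

```shell
#!/bin/sh
# Sketch: show which real binary the "java" alternative currently points at.
JAVA_BIN=$(command -v java || true)
if [ -n "$JAVA_BIN" ]; then
    # readlink -f follows the /usr/bin/java -> /etc/alternatives/java chain
    echo "java resolves to: $(readlink -f "$JAVA_BIN")"
    java -version 2>&1 | head -n 1
else
    echo "java not found on PATH"
fi
```

After a successful install, the resolved path should sit under /usr/lib/jvm/j2sdk1.7-oracle.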

SSH key authentication

ssh-keygen -t rsa -f ~/.ssh/id_rsa -P ""
ssh-copy-id localhost
The authenticity of host 'localhost (::1)' can't be established.
ECDSA key fingerprint is ****.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'localhost' (ECDSA) to the list of known hosts.
cloud@localhost's password:

Now try logging into the machine, with "ssh 'localhost'", and check in:

  ~/.ssh/authorized_keys

to make sure we haven't added extra keys that you weren't expecting.
ssh localhost
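What ssh-copy-id did above can also be reproduced by hand, which is useful when copying the key to a node where the tool is missing. The sketch below works in a scratch directory rather than the real ~/.ssh (SSH_DIR is an illustrative name, not part of OpenSSH):

```shell
#!/bin/sh
# Sketch of what ssh-copy-id does, run in a scratch directory so the
# real ~/.ssh is left untouched.
set -e
SSH_DIR=$(mktemp -d)/.ssh
mkdir -p "$SSH_DIR"
chmod 700 "$SSH_DIR"                               # sshd insists on tight permissions
ssh-keygen -q -t rsa -f "$SSH_DIR/id_rsa" -P ""    # empty passphrase, as in the lab
# ssh-copy-id essentially appends the public key to authorized_keys on the target
cat "$SSH_DIR/id_rsa.pub" >> "$SSH_DIR/authorized_keys"
chmod 600 "$SSH_DIR/authorized_keys"
grep -c "ssh-rsa" "$SSH_DIR/authorized_keys"       # prints: 1
```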

Install Hadoop 1.2.1

Create log directory

sudo mkdir /var/hadoop
sudo chown cloud:cloud /var/hadoop

Download the Hadoop 1.2.1 binary package

tar zxvf hadoop-1.2.1-bin.tar.gz
mv hadoop-1.2.1 hadoop
cd hadoop


  1. core-site.xml

cat > conf/core-site.xml << EOF
<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://localhost:9000</value>
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/var/hadoop/hadoop-\${user.name}</value>
  </property>
</configuration>
EOF

cat conf/core-site.xml
<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://localhost:9000</value>
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/var/hadoop/hadoop-${user.name}</value>
  </property>
</configuration>
  2. hdfs-site.xml

cat > conf/hdfs-site.xml << EOF
<configuration>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
</configuration>
EOF

cat conf/hdfs-site.xml
<configuration>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
</configuration>
  3. mapred-site.xml

cat > conf/mapred-site.xml << EOF
<configuration>
  <property>
    <name>mapred.job.tracker</name>
    <value>localhost:9001</value>
  </property>
</configuration>
EOF

cat conf/mapred-site.xml
<configuration>
  <property>
    <name>mapred.job.tracker</name>
    <value>localhost:9001</value>
  </property>
</configuration>
  4. Edit

    Add or edit the line "export JAVA_HOME" as follows:

    EDIT conf/
    export JAVA_HOME=/usr/lib/jvm/j2sdk1.7-oracle/jre
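Before moving on to formatting HDFS, a quick grep can confirm each site file actually defines the property it is supposed to. check_prop and the throwaway sample file below are illustrative helpers, not part of Hadoop:

```shell
#!/bin/sh
# Sketch: assert that a Hadoop site file defines a given property name.
# check_prop is an illustrative helper, not a Hadoop tool.
check_prop() {
    if grep -q "<name>$2</name>" "$1"; then
        echo "$1: $2 present"
    else
        echo "$1: $2 MISSING"
        return 1
    fi
}

# Demo against a throwaway copy of the lab's hdfs-site.xml:
TMP=$(mktemp)
cat > "$TMP" << 'EOF'
<configuration>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
</configuration>
EOF
check_prop "$TMP" dfs.replication
rm -f "$TMP"
```

On the real tree you would run it as `check_prop conf/hdfs-site.xml dfs.replication`, and likewise for the other site files.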

Format HDFS

bin/hadoop namenode -format

Launch Hadoop

bin/start-all.sh


Check the service status via the following links (the Hadoop 1.x default web UIs):

  1. NameNode: http://localhost:50070/
  2. JobTracker: http://localhost:50030/

Stop Hadoop

bin/stop-all.sh


Chi-Sheng Su