Installing Hadoop 1.2.1 on Ubuntu 14.04 LTS (fully distributed cluster mode)

Installation steps:

1) JDK -- Hadoop is written in Java; without a Java virtual machine installed, Hadoop programs cannot run.

2) Create a dedicated Linux user for running and executing Hadoop tasks (map and reduce tasks, for example), much like a service account on Windows, and grant it permission to access the JDK installation directory so it can execute the Java virtual machine. This account will ultimately run bin/start-all.sh to launch all of the Hadoop daemons, so it is clearly the account the daemons run as and must have sufficient permissions. You also need to configure this account's environment variables, adding the JVM home directory as one of them; otherwise running Hadoop tasks later will fail because the java command cannot be found in the current environment.

3) Edit /etc/hosts and /etc/hostname -- in cluster mode the machines need to communicate with each other, and that communication runs over IP addresses. But we usually refer to machines by hostname and rely on the hostname-to-IP mapping to reach the target host. So naturally we have to edit /etc/hosts and /etc/hostname: the former holds the hostname-to-IP mappings, the latter the machine's local hostname.

4) SSH -- the cluster machines access each other's resources over SSH, which establishes a secure channel for data exchange; SSH public-key authorization is also used to enable password-free logins to the target hosts.

5) Then comes installing Hadoop itself, which requires editing several configuration files. Hadoop's configuration files fall into a few categories.

Read-only default configuration: src/core/core-default.xml, src/hdfs/hdfs-default.xml, src/mapred/mapred-default.xml, conf/mapred-queues.xml.

Site-specific configuration: conf/core-site.xml, conf/hdfs-site.xml, conf/mapred-site.xml, conf/mapred-queues.xml. These files configure Hadoop's core functionality, such as the HDFS and MapReduce directory locations.

Environment configuration: conf/hadoop-env.sh

Back to the point: since Hadoop's core is HDFS and MapReduce, at a minimum you have to configure the NameNode directory location, the DataNode directory locations, the port the JobTracker and TaskTrackers use to communicate, the MapReduce system and local directories, and so on.

These settings are identical on the master and the slave machines.

6) Once configuration is done, format HDFS. The format is run on the master machine.

7) Once HDFS is formatted, start all of the Hadoop daemons. A minimal sketch of these last two steps is shown right after this list.
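A minimal sketch of the format-and-start sequence, assuming Hadoop is unpacked under /opt/hadoop-1.2.1 and run as the dedicated hadoop user (exactly as in the walkthrough below):

# Run on the master as the hadoop user.
cd /opt/hadoop-1.2.1/bin
./hadoop namenode -format   # format HDFS; answer Y once, only on first setup
./start-all.sh              # starts NameNode, DataNodes, SecondaryNameNode, JobTracker, TaskTrackers
jps                         # list the running Java daemons to verify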

The environment was built with VMware Workstation 12, using three Linux virtual machines: master, slave1, and slave2.

Configuration details:

                   master                    slave1                  slave2
OS                 Ubuntu 14.04 LTS x64      Ubuntu 14.04 LTS x64    Ubuntu 14.04 LTS x64
Memory             1 GB                      1 GB                    1 GB
Hard drive space   20 GB                     20 GB                   20 GB
Processors         2                         2                       2
IP address         192.168.2.110             192.168.2.111           192.168.2.112
Roles              NameNode, DataNode,       DataNode, TaskTracker   DataNode, TaskTracker
                   JobTracker, TaskTracker,
                   SecondaryNameNode
Hadoop directory   /opt/hadoop-1.2.1         /opt/hadoop-1.2.1       /opt/hadoop-1.2.1
JDK version        JDK 1.8                   JDK 1.8                 JDK 1.8

First, install the JDK and SSH on the master machine.

1. Installing the JDK

jerry@ubuntu:/run/network$ scp jerry@192.168.2.100:/home/jerry/Download/jdk-8u65-linux-x64.tar.gz ~
The authenticity of host '192.168.2.100 (192.168.2.100)' can't be established.
ECDSA key fingerprint is da:b7:c3:2a:ea:a2:76:4c:c3:c1:68:ca:0e:c2:ea:92.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '192.168.2.100' (ECDSA) to the list of known hosts.
jerry@192.168.2.100's password:
scp: /home/jerry/Download/jdk-8u65-linux-x64.tar.gz: No such file or directory
jerry@ubuntu:/run/network$ scp jerry@192.168.2.100:/home/jerry/Downloads/jdk-8u65-linux-x64.tar.gz ~
jerry@192.168.2.100's password:
jdk-8u65-linux-x64.tar.gz                     100%  173MB  21.6MB/s   00:08
jerry@ubuntu:/run/network$ cd ~
jerry@ubuntu:~$ ls
Desktop    Downloads         jdk-8u65-linux-x64.tar.gz  Pictures  Templates
Documents  examples.desktop  Music                      Public    Videos
jerry@ubuntu:~$ cd .
jerry@ubuntu:~$ cd /
jerry@ubuntu:/$ ls
bin    dev   initrd.img  lost+found  opt   run   sys  var
boot   etc   lib         media       proc  sbin  tmp  vmlinuz
cdrom  home  lib64       mnt         root  srv   usr
jerry@ubuntu:/$ sudo mkdir jvm
[sudo] password for jerry:
jerry@ubuntu:/$ rm jvm/
rm: cannot remove ‘jvm/’: Is a directory
jerry@ubuntu:/$ rm -d jvm/
rm: remove write-protected directory ‘jvm/’? y
rm: cannot remove ‘jvm/’: Permission denied
jerry@ubuntu:/$ ls
bin    dev   initrd.img  lib64       mnt   root  srv  usr
boot   etc   jvm         lost+found  opt   run   sys  var
cdrom  home  lib         media       proc  sbin  tmp  vmlinuz
jerry@ubuntu:/$ sudo mkdir /usr/lib/jvm
jerry@ubuntu:/$ cd ~
jerry@ubuntu:~$ sudo tar zxf ./jdk-8u65-linux-x64.tar.gz -C /usr/lib/jvm/
jerry@ubuntu:~$ cd /usr/lib/jvm/
jerry@ubuntu:/usr/lib/jvm$ sudo mv jdk1.8.0_65 java
jerry@ubuntu:/usr/lib/jvm$ cd java/
jerry@ubuntu:/usr/lib/jvm/java$ sudo vim ~/.bashrc
jerry@ubuntu:/usr/lib/jvm/java$ tail -n 4 ~/.bashrc
export JAVA_HOME=/usr/lib/jvm/java
export JRE_HOME=${JAVA_HOME}/jre
export CLASSPATH=.:${JAVA_HOME}/lib:${JRE_HOME}/lib
export PATH=${JAVA_HOME}/bin:$PATH
jerry@ubuntu:/usr/lib/jvm/java$ java -version
The program 'java' can be found in the following packages:
 * default-jre
 * gcj-4.8-jre-headless
 * openjdk-7-jre-headless
 * gcj-4.6-jre-headless
 * openjdk-6-jre-headless
Try: sudo apt-get install <selected package>
jerry@ubuntu:/usr/lib/jvm/java$ sudo update-alternatives --install /usr/bin/java java /usr/lib/jvm/java/bin/java 300
update-alternatives: using /usr/lib/jvm/java/bin/java to provide /usr/bin/java (java) in auto mode
jerry@ubuntu:/usr/lib/jvm/java$ sudo update-alternatives --install /usr/bin/javac javac /usr/lib/jvm/java/bin/javac 300
update-alternatives: using /usr/lib/jvm/java/bin/javac to provide /usr/bin/javac (javac) in auto mode
jerry@ubuntu:/usr/lib/jvm/java$ sudo update-alternatives --install /usr/bin/jar jar /usr/lib/jvm/java/bin/jar 300
update-alternatives: using /usr/lib/jvm/java/bin/jar to provide /usr/bin/jar (jar) in auto mode
jerry@ubuntu:/usr/lib/jvm/java$ sudo update-alternatives --install /usr/bin/javah javah /usr/lib/jvm/java/bin/javah 300
update-alternatives: using /usr/lib/jvm/java/bin/javah to provide /usr/bin/javah (javah) in auto mode
jerry@ubuntu:/usr/lib/jvm/java$ sudo update-alternatives --install /usr/bin/javap javap /usr/lib/jvm/java/bin/javap 300
update-alternatives: using /usr/lib/jvm/java/bin/javap to provide /usr/bin/javap (javap) in auto mode
jerry@ubuntu:/usr/lib/jvm/java$ sudo update-alternatives --config java
There is only one alternative in link group java (providing /usr/bin/java): /usr/lib/jvm/java/bin/java
Nothing to configure.
jerry@ubuntu:/usr/lib/jvm/java$ java -version
java version "1.8.0_65"
Java(TM) SE Runtime Environment (build 1.8.0_65-b17)
Java HotSpot(TM) 64-Bit Server VM (build 25.65-b01, mixed mode)
jerry@ubuntu:/usr/lib/jvm/java$

2. Adding the hadoop user and group

jerry@ubuntu:/usr/lib/jvm/java$ sudo groupadd hadoop
[sudo] password for jerry:
jerry@ubuntu:/usr/lib/jvm/java$ useradd hadoop -g hadoop
useradd: Permission denied.
useradd: cannot lock /etc/passwd; try again later.
jerry@ubuntu:/usr/lib/jvm/java$ sudo useradd hadoop -g hadoop
jerry@ubuntu:/usr/lib/jvm/java$
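Note that on Ubuntu a plain useradd does not create a home directory or set a login shell, and the account has no password yet. A sketch of a more complete invocation (the -m and -s flags are my addition, not from the original session):

sudo useradd -m -s /bin/bash -g hadoop hadoop   # -m creates /home/hadoop, -s sets the login shell
sudo passwd hadoop                              # give the account a password so you can log in as it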

3. Edit /etc/hosts to map the IP addresses of master, slave1, and slave2 to their hostnames. Edit /etc/hostname to set the local machine's hostname.

jerry@ubuntu:~$ cat /etc/hosts
127.0.0.1       localhost
127.0.1.1       master

# The following lines are desirable for IPv6 capable hosts
::1     ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters

master 192.168.2.110
slave1 192.168.2.111
slave2 192.168.2.112
jerry@ubuntu:~$ cat /etc/hostname
master
jerry@ubuntu:~$
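Two things to watch in this file. First, /etc/hosts lines are expected in "IP hostname" order; since ssh slave1 resolves correctly later in this walkthrough, the working file presumably looked like the sketch below. Second, the "127.0.1.1 master" line that Ubuntu adds by default can make Hadoop daemons bind to the loopback address (the format log later even reports master/127.0.1.1); commenting it out is a common fix. A hedged example:

127.0.0.1   localhost
# 127.0.1.1 master        <- commented out so "master" resolves to the real NIC

192.168.2.110   master
192.168.2.111   slave1
192.168.2.112   slave2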

4. Installing SSH

jerry@ubuntu:/usr/lib/jvm/java$ sudo apt-get install ssh
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following extra packages will be installed:
  libck-connector0 ncurses-term openssh-client openssh-server
  openssh-sftp-server ssh-import-id
Suggested packages:
  libpam-ssh keychain monkeysphere rssh molly-guard
The following NEW packages will be installed:
  libck-connector0 ncurses-term openssh-server openssh-sftp-server ssh
  ssh-import-id
The following packages will be upgraded:
  openssh-client
1 upgraded, 6 newly installed, 0 to remove and 239 not upgraded.
Need to get 617 kB/1,181 kB of archives.
After this operation, 3,450 kB of additional disk space will be used.
Do you want to continue? [Y/n] Y
Get:1 http://us.archive.ubuntu.com/ubuntu/ trusty/main libck-connector0 amd64 0.4.5-3.1ubuntu2 [10.5 kB]
Get:2 http://us.archive.ubuntu.com/ubuntu/ trusty/main ncurses-term all 5.9+20140118-1ubuntu1 [243 kB]
Get:3 http://us.archive.ubuntu.com/ubuntu/ trusty-updates/main openssh-sftp-server amd64 1:6.6p1-2ubuntu2.3 [34.1 kB]
Get:4 http://us.archive.ubuntu.com/ubuntu/ trusty-updates/main openssh-server amd64 1:6.6p1-2ubuntu2.3 [319 kB]
Get:5 http://us.archive.ubuntu.com/ubuntu/ trusty-updates/main ssh all 1:6.6p1-2ubuntu2.3 [1,114 B]
Get:6 http://us.archive.ubuntu.com/ubuntu/ trusty/main ssh-import-id all 3.21-0ubuntu1 [9,624 B]
Fetched 617 kB in 16s (37.3 kB/s)
Preconfiguring packages ...
Selecting previously unselected package libck-connector0:amd64.
(Reading database ... 166519 files and directories currently installed.)
Preparing to unpack .../libck-connector0_0.4.5-3.1ubuntu2_amd64.deb ...
Unpacking libck-connector0:amd64 (0.4.5-3.1ubuntu2) ...
Preparing to unpack .../openssh-client_1%3a6.6p1-2ubuntu2.3_amd64.deb ...
Unpacking openssh-client (1:6.6p1-2ubuntu2.3) over (1:6.6p1-2ubuntu2) ...
Selecting previously unselected package ncurses-term.
Preparing to unpack .../ncurses-term_5.9+20140118-1ubuntu1_all.deb ...
Unpacking ncurses-term (5.9+20140118-1ubuntu1) ...
Selecting previously unselected package openssh-sftp-server.
Preparing to unpack .../openssh-sftp-server_1%3a6.6p1-2ubuntu2.3_amd64.deb ...
Unpacking openssh-sftp-server (1:6.6p1-2ubuntu2.3) ...
Selecting previously unselected package openssh-server.
Preparing to unpack .../openssh-server_1%3a6.6p1-2ubuntu2.3_amd64.deb ...
Unpacking openssh-server (1:6.6p1-2ubuntu2.3) ...
Selecting previously unselected package ssh.
Preparing to unpack .../ssh_1%3a6.6p1-2ubuntu2.3_all.deb ...
Unpacking ssh (1:6.6p1-2ubuntu2.3) ...
Selecting previously unselected package ssh-import-id.
Preparing to unpack .../ssh-import-id_3.21-0ubuntu1_all.deb ...
Unpacking ssh-import-id (3.21-0ubuntu1) ...
Processing triggers for man-db (2.6.7.1-1ubuntu1) ...
Processing triggers for ureadahead (0.100.0-16) ...
ureadahead will be reprofiled on next reboot
Processing triggers for ufw (0.34~rc-0ubuntu2) ...
Setting up libck-connector0:amd64 (0.4.5-3.1ubuntu2) ...
Setting up openssh-client (1:6.6p1-2ubuntu2.3) ...
Setting up ncurses-term (5.9+20140118-1ubuntu1) ...
Setting up openssh-sftp-server (1:6.6p1-2ubuntu2.3) ...
Setting up openssh-server (1:6.6p1-2ubuntu2.3) ...
Creating SSH2 RSA key; this may take some time ...
Creating SSH2 DSA key; this may take some time ...
Creating SSH2 ECDSA key; this may take some time ...
Creating SSH2 ED25519 key; this may take some time ...
ssh start/running, process 7611
Setting up ssh-import-id (3.21-0ubuntu1) ...
Processing triggers for ureadahead (0.100.0-16) ...
Processing triggers for ufw (0.34~rc-0ubuntu2) ...
Setting up ssh (1:6.6p1-2ubuntu2.3) ...
Processing triggers for libc-bin (2.19-0ubuntu6.6) ...
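To confirm the SSH server really came up (the "ssh start/running" line above suggests it did), a quick check on Ubuntu 14.04 might look like:

service ssh status        # should report that ssh is running
ssh localhost             # first loopback login; accept the host key when prompted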

5. Extract the Hadoop tarball locally and edit the configuration files

jerry@ubuntu:~$ sudo tar zxf hadoop-1.2.1.tar.gz -C /opt/
[sudo] password for jerry:
jerry@ubuntu:~$ cd /opt/hadoop-1.2.1/
jerry@ubuntu:/opt/hadoop-1.2.1$ ls
bin          hadoop-ant-1.2.1.jar          ivy          sbin
build.xml    hadoop-client-1.2.1.jar       ivy.xml      share
c++          hadoop-core-1.2.1.jar         lib          src
CHANGES.txt  hadoop-examples-1.2.1.jar     libexec      webapps
conf         hadoop-minicluster-1.2.1.jar  LICENSE.txt
contrib      hadoop-test-1.2.1.jar         NOTICE.txt
docs         hadoop-tools-1.2.1.jar        README.txt
jerry@ubuntu:/opt/hadoop-1.2.1$ cd conf/
jerry@ubuntu:/opt/hadoop-1.2.1/conf$ ls
capacity-scheduler.xml      hadoop-policy.xml      slaves
configuration.xsl           hdfs-site.xml          ssl-client.xml.example
core-site.xml               log4j.properties       ssl-server.xml.example
fair-scheduler.xml          mapred-queue-acls.xml  taskcontroller.cfg
hadoop-env.sh               mapred-site.xml        task-log4j.properties
hadoop-metrics2.properties  masters
jerry@ubuntu:/opt/hadoop-1.2.1/conf$ vim hadoop-env.sh
jerry@ubuntu:/opt/hadoop-1.2.1/conf$ sudo vim hadoop-env.sh
jerry@ubuntu:/opt/hadoop-1.2.1/conf$ tail -n 1 hadoop-env.sh
export JAVA_HOME=/usr/lib/jvm/java
jerry@ubuntu:/opt/hadoop-1.2.1/conf$ sudo vim core-site.xml
jerry@ubuntu:/opt/hadoop-1.2.1/conf$ cat core-site.xml
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

<!-- Put site-specific property overrides in this file. -->

<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://master:9000</value>
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/hadoop/hadooptmp</value>
  </property>
</configuration>
jerry@ubuntu:/opt/hadoop-1.2.1/conf$ sudo vim hdfs-site.xml
jerry@ubuntu:/opt/hadoop-1.2.1/conf$ cat hdfs-site.xml
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

<!-- Put site-specific property overrides in this file. -->

<configuration>
  <property>
    <name>dfs.replication</name>
    <value>3</value>
  </property>
  <property>
    <name>dfs.name.dir</name>
    <value>/hadoop/hdfs/name</value>
  </property>
  <property>
    <name>dfs.data.dir</name>
    <value>/hadoop/hdfs/data</value>
  </property>
</configuration>
jerry@ubuntu:/opt/hadoop-1.2.1/conf$ sudo vim mapred-
jerry@ubuntu:/opt/hadoop-1.2.1/conf$ sudo vim mapred-site.xml
jerry@ubuntu:/opt/hadoop-1.2.1/conf$ cat mapred-site.xml
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

<!-- Put site-specific property overrides in this file. -->

<configuration>
  <property>
    <name>mapred.job.tracker</name>
    <value>master:9001</value>
  </property>
</configuration>
jerry@ubuntu:/opt/hadoop-1.2.1/conf$ sudo vim mapred-site.xml
jerry@ubuntu:/opt/hadoop-1.2.1/conf$ cat mapred-site.xml
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

<!-- Put site-specific property overrides in this file. -->

<configuration>
  <property>
    <name>mapred.job.tracker</name>
    <value>master:9001</value>
  </property>
  <property>
    <name>mapred.system.dir</name>
    <value>/hadoop/mapred/mapred_system</value>
  </property>
  <property>
    <name>mapred.local.dir</name>
    <value>/hadoop/mapred/mapred_local</value>
  </property>
</configuration>
jerry@ubuntu:/opt/hadoop-1.2.1/conf$ sudo vim masters
jerry@ubuntu:/opt/hadoop-1.2.1/conf$ sudo vim slaves
jerry@ubuntu:/opt/hadoop-1.2.1/conf$ cat masters
master
jerry@ubuntu:/opt/hadoop-1.2.1/conf$ cat slaves
master
slave1
slave2
jerry@ubuntu:/opt/hadoop-1.2.1/conf$
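The transcript only shows the files being edited on master, but as noted in step 5 the configuration must be identical on every node. One way to propagate the configured tree, assuming the same /opt layout on the slaves (this command is my sketch, not from the original session):

# Copy to a world-writable staging dir, then move into /opt with sudo on each slave.
scp -r /opt/hadoop-1.2.1 jerry@slave1:/tmp/ && ssh jerry@slave1 'sudo mv /tmp/hadoop-1.2.1 /opt/'
scp -r /opt/hadoop-1.2.1 jerry@slave2:/tmp/ && ssh jerry@slave2 'sudo mv /tmp/hadoop-1.2.1 /opt/'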

Create the HDFS and MapReduce working directories on master, slave1, and slave2 and give the hadoop user ownership of them (shown here on slave1; repeat on the other machines).

jerry@slave1:~$ sudo mkdir -p /hadoop/hdfs/name
[sudo] password for jerry:
jerry@slave1:~$ sudo mkdir -p /hadoop/hdfs/data
jerry@slave1:~$ sudo mkdir -p /hadoop/mapred/mapred_system
jerry@slave1:~$ sudo mkdir -p /hadoop/mapred/mapred_local
jerry@slave1:~$ sudo mkdir -p /hadoop/hadooptmp
jerry@slave1:~$ chown -R hadoop:hadoop /hadoop/
chown: changing ownership of ‘/hadoop/mapred/mapred_local’: Operation not permitted
chown: changing ownership of ‘/hadoop/mapred/mapred_system’: Operation not permitted
chown: changing ownership of ‘/hadoop/mapred’: Operation not permitted
chown: changing ownership of ‘/hadoop/hdfs/data’: Operation not permitted
chown: changing ownership of ‘/hadoop/hdfs/name’: Operation not permitted
chown: changing ownership of ‘/hadoop/hdfs’: Operation not permitted
chown: changing ownership of ‘/hadoop/hadooptmp’: Operation not permitted
chown: changing ownership of ‘/hadoop/’: Operation not permitted
jerry@slave1:~$ sudo chown -R hadoop:hadoop /hadoop/
jerry@slave1:~$

Generate the SSH key pair on master, then copy the authorized_keys file into the hadoop user's ~/.ssh directory on slave1 and slave2.

hadoop@master:~$ sudo mkdir ~/.ssh
[sudo] password for hadoop:
hadoop@master:~$ sudo chown -R hadoop:hadoop ~/.ssh
hadoop@master:~$ ssh-keygen -t rsa -P '' -f ~/.ssh/id_rsa
Generating public/private rsa key pair.
Your identification has been saved in /home/hadoop/.ssh/id_rsa.
Your public key has been saved in /home/hadoop/.ssh/id_rsa.pub.
The key fingerprint is:
36:99:e0:e9:aa:b8:c3:c6:cd:97:59:9f:49:87:0d:bf hadoop@master
The key's randomart image is:
+--[ RSA 2048]----+
|                 |
|                 |
|   .             |
|  . o +          |
|   o S =         |
|  . o + +        |
|o  o = o +  .    |
|.= o  = + E      |
|+o..o            |
+-----------------+
hadoop@master:~/.ssh$ scp authorized_keys slave1:/home/hadoop/.ssh
The authenticity of host 'slave1 (192.168.2.111)' can't be established.
ECDSA key fingerprint is 48:93:30:0d:bb:3a:85:da:46:3f:75:76:3e:b7:42:6a.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'slave1,192.168.2.111' (ECDSA) to the list of known hosts.
hadoop@slave1's password:
scp: /home/hadoop/.ssh/authorized_keys: Permission denied
hadoop@master:~/.ssh$ scp authorized_keys slave1:/home/hadoop/.ssh
hadoop@slave1's password:
authorized_keys                               100%  395     0.4KB/s   00:00
hadoop@master:~/.ssh$ scp authorized_keys slave2:/home/hadoop/.ssh
The authenticity of host 'slave2 (192.168.2.112)' can't be established.
ECDSA key fingerprint is 48:93:30:0d:bb:3a:85:da:46:3f:75:76:3e:b7:42:6a.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'slave2,192.168.2.112' (ECDSA) to the list of known hosts.
hadoop@slave2's password:
authorized_keys                               100%  395     0.4KB/s   00:00
hadoop@master:~/.ssh$
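Notice that the session above never shows authorized_keys being created; for the scp to have something to copy, the public key must be appended to it first. OpenSSH is also strict about permissions on these files. A sketch of the steps implied between ssh-keygen and the scp:

cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys   # authorize the master's own key
chmod 700 ~/.ssh                                  # sshd silently ignores keys if these are too open
chmod 600 ~/.ssh/authorized_keys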

Then test whether the next SSH connection still asks for a password. It does not.

hadoop@master:~/.ssh$ ssh slave1
Welcome to Ubuntu 14.04.3 LTS (GNU/Linux 3.19.0-42-generic x86_64)

 * Documentation:  https://help.ubuntu.com/

0 packages can be updated.
0 updates are security updates.

$ ls -al
total 239436
drwxr-xr-x 12 hadoop hadoop      4096 Jan  9 22:37 .
drwxr-xr-x  4 root   root        4096 Jan  9 22:25 ..
drwx------  2 hadoop hadoop      4096 Jan  9 22:37 .cache
drwxr-xr-x  2 hadoop hadoop      4096 Dec 27 04:19 Desktop
drwxr-xr-x  2 hadoop hadoop      4096 Dec 27 04:19 Documents
drwxr-xr-x  2 hadoop hadoop      4096 Dec 27 04:19 Downloads
-rw-r--r--  1 hadoop hadoop      8980 Dec 27 03:27 examples.desktop
-rw-rw-r--  1 hadoop hadoop  63851630 Dec 30 05:25 hadoop-1.2.1.tar.gz
-rw-rw-r--  1 hadoop hadoop 181260798 Dec 27 06:49 jdk-8u65-linux-x64.tar.gz
drwxr-xr-x  2 hadoop hadoop      4096 Dec 27 04:19 Music
drwxr-xr-x  2 hadoop hadoop      4096 Dec 27 04:19 Pictures
drwxr-xr-x  2 hadoop hadoop      4096 Dec 27 04:19 Public
drwxr-xr-x  2 hadoop hadoop      4096 Jan  9 22:39 .ssh
drwxr-xr-x  2 hadoop hadoop      4096 Dec 27 04:19 Templates
drwxr-xr-x  2 hadoop hadoop      4096 Dec 27 04:19 Videos
$
hadoop@master:~/.ssh$ ssh slave2
Welcome to Ubuntu 14.04.3 LTS (GNU/Linux 3.19.0-42-generic x86_64)

 * Documentation:  https://help.ubuntu.com/

27 packages can be updated.
23 updates are security updates.

$ ls -l
total 239424
drwxr-xr-x 2 hadoop hadoop      4096 Dec 27 04:19 Desktop
drwxr-xr-x 2 hadoop hadoop      4096 Dec 27 04:19 Documents
drwxr-xr-x 2 hadoop hadoop      4096 Dec 27 04:19 Downloads
-rw-r--r-- 1 hadoop hadoop      8980 Dec 27 03:27 examples.desktop
-rw-rw-r-- 1 hadoop hadoop  63851630 Dec 30 05:25 hadoop-1.2.1.tar.gz
-rw-rw-r-- 1 hadoop hadoop 181260798 Dec 27 06:49 jdk-8u65-linux-x64.tar.gz
drwxr-xr-x 2 hadoop hadoop      4096 Dec 27 04:19 Music
drwxr-xr-x 2 hadoop hadoop      4096 Dec 27 04:19 Pictures
drwxr-xr-x 2 hadoop hadoop      4096 Dec 27 04:19 Public
drwxr-xr-x 2 hadoop hadoop      4096 Dec 27 04:19 Templates
drwxr-xr-x 2 hadoop hadoop      4096 Dec 27 04:19 Videos
$ exit
Connection to slave2 closed.
hadoop@master:~/.ssh$

Format the HDFS NameNode

hadoop@master:/opt/hadoop-1.2.1/bin$ ./hadoop namenode -format
16/01/09 23:33:18 INFO namenode.NameNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG:   host = master/127.0.1.1
STARTUP_MSG:   args = [-format]
STARTUP_MSG:   version = 1.2.1
STARTUP_MSG:   build = https://svn.apache.org/repos/asf/hadoop/common/branches/branch-1.2 -r 1503152; compiled by 'mattf' on Mon Jul 22 15:23:09 PDT 2013
STARTUP_MSG:   java = 1.8.0_65
************************************************************/
Re-format filesystem in /hadoop/hdfs/name ? (Y or N) Y
16/01/09 23:34:09 INFO util.GSet: Computing capacity for map BlocksMap
16/01/09 23:34:09 INFO util.GSet: VM type       = 64-bit
16/01/09 23:34:09 INFO util.GSet: 2.0% max memory = 1013645312
16/01/09 23:34:09 INFO util.GSet: capacity      = 2^21 = 2097152 entries
16/01/09 23:34:09 INFO util.GSet: recommended=2097152, actual=2097152
16/01/09 23:34:10 INFO namenode.FSNamesystem: fsOwner=hadoop
16/01/09 23:34:10 INFO namenode.FSNamesystem: supergroup=supergroup
16/01/09 23:34:10 INFO namenode.FSNamesystem: isPermissionEnabled=true
16/01/09 23:34:10 INFO namenode.FSNamesystem: dfs.block.invalidate.limit=100
16/01/09 23:34:10 INFO namenode.FSNamesystem: isAccessTokenEnabled=false accessKeyUpdateInterval=0 min(s), accessTokenLifetime=0 min(s)
16/01/09 23:34:10 INFO namenode.FSEditLog: dfs.namenode.edits.toleration.length = 0
16/01/09 23:34:10 INFO namenode.NameNode: Caching file names occuring more than 10 times
16/01/09 23:34:11 INFO common.Storage: Image file /hadoop/hdfs/name/current/fsimage of size 112 bytes saved in 0 seconds.
16/01/09 23:34:11 INFO namenode.FSEditLog: closing edit log: position=4, editlog=/hadoop/hdfs/name/current/edits
16/01/09 23:34:11 INFO namenode.FSEditLog: close success: truncate to 4, editlog=/hadoop/hdfs/name/current/edits
16/01/09 23:34:11 INFO common.Storage: Storage directory /hadoop/hdfs/name has been successfully formatted.
16/01/09 23:34:11 INFO namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at master/127.0.1.1
************************************************************/
hadoop@master:/opt/hadoop-1.2.1/bin$
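One caution: re-running the format after the DataNodes have already stored blocks gives the NameNode a new namespaceID, and the DataNodes will refuse to rejoin until their data directories are cleared. A hedged recovery sketch (this destroys all HDFS data; the path is the dfs.data.dir configured above):

# On each DataNode, only if a re-format left mismatched namespaceIDs:
rm -rf /hadoop/hdfs/data/*    # wipe the old block storage, then restart the DataNode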

Start all of the Hadoop daemons on master

hadoop@master:/opt/hadoop-1.2.1/bin$ ./start-all.sh
starting namenode, logging to /opt/hadoop-1.2.1/libexec/../logs/hadoop-hadoop-namenode-master.out
slave2: starting datanode, logging to /opt/hadoop-1.2.1/libexec/../logs/hadoop-hadoop-datanode-slave2.out
slave1: starting datanode, logging to /opt/hadoop-1.2.1/libexec/../logs/hadoop-hadoop-datanode-slave1.out
master: starting datanode, logging to /opt/hadoop-1.2.1/libexec/../logs/hadoop-hadoop-datanode-master.out
master: starting secondarynamenode, logging to /opt/hadoop-1.2.1/libexec/../logs/hadoop-hadoop-secondarynamenode-master.out
starting jobtracker, logging to /opt/hadoop-1.2.1/libexec/../logs/hadoop-hadoop-jobtracker-master.out
slave2: starting tasktracker, logging to /opt/hadoop-1.2.1/libexec/../logs/hadoop-hadoop-tasktracker-slave2.out
slave1: starting tasktracker, logging to /opt/hadoop-1.2.1/libexec/../logs/hadoop-hadoop-tasktracker-slave1.out
master: starting tasktracker, logging to /opt/hadoop-1.2.1/libexec/../logs/hadoop-hadoop-tasktracker-master.out
hadoop@master:/opt/hadoop-1.2.1/bin$ jps
5088 JobTracker
5010 SecondaryNameNode
4871 DataNode
4732 NameNode
5277 Jps
5230 TaskTracker
hadoop@master:/opt/hadoop-1.2.1/bin$
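Besides jps, the Hadoop 1.x web interfaces are a convenient health check (these are the default ports, assuming no overrides beyond the configuration above):

# Open in a browser, or probe from the shell:
curl -s http://master:50070/ > /dev/null && echo "NameNode UI up"      # HDFS status page
curl -s http://master:50030/ > /dev/null && echo "JobTracker UI up"    # MapReduce status page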

Checking slave1

hadoop@slave1:~/.ssh$ jps
3669 TaskTracker
3718 Jps
3560 DataNode
hadoop@slave1:~/.ssh$

Checking slave2

hadoop@slave2:~$ jps
3216 Jps
2982 TaskTracker
3095 DataNode

Issues to watch out for along the way:

Since you are starting the Hadoop services as the hadoop user, that user must at least have read and execute permission on the Java and Hadoop installations. Keep this in mind.

Second, this user also needs write permission on the HDFS directories and the MapReduce directories; a hedged sketch follows.
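A sketch of granting those permissions, using the directories from this walkthrough:

sudo chown -R hadoop:hadoop /opt/hadoop-1.2.1   # Hadoop install tree: read/execute
sudo chown -R hadoop:hadoop /hadoop             # HDFS and MapReduce data dirs: write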

Problem 1:

hadoop@master:/usr/lib/jvm/java/bin$ jps
The program 'jps' can be found in the following packages:
 * openjdk-7-jdk
 * openjdk-6-jdk
Ask your administrator to install one of them

The JDK was clearly installed and the environment variables contained no bad paths, so why couldn't the jps command be found? The reason is that the JAVA_HOME and PATH exports were added to jerry's ~/.bashrc, not the hadoop user's, so the hadoop user's PATH never included ${JAVA_HOME}/bin. In the end, update-alternatives solved the problem by putting jps into /usr/bin, which is on every user's PATH.

hadoop@master:/usr/lib/jvm/java/bin$ sudo update-alternatives --install /usr/bin/jps jps /usr/lib/jvm/java/bin/jps 1
update-alternatives: using /usr/lib/jvm/java/bin/jps to provide /usr/bin/jps (jps) in auto mode
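An alternative (or complement) to registering every JDK tool with update-alternatives is to give the hadoop user the same environment exports that jerry's ~/.bashrc received in the JDK section:

# Appended to /home/hadoop/.bashrc; same values as in the JDK section above.
export JAVA_HOME=/usr/lib/jvm/java
export PATH=${JAVA_HOME}/bin:$PATH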
