Flume 1.5.0 collecting logs into Hadoop fails with java.lang.NoClassDefFoundError: org/apache/hadoop/io/Seq...

I am using the stock Apache Flume here, not CDH. When I wrote the previous post everything went smoothly and this problem never came up.

That is because in the previous post Flume and Hadoop were installed on the same machine, so the problem could not surface. In fact the startup output already tells us why:

# ./bin/flume-ng agent --conf conf --conf-file ./conf/hdfs.conf --name m2m -Dflume.root.logger=INFO,LOGFILE
Info: Including Hadoop libraries found via (/usr/local/hadoop-2.5.2/bin/hadoop) for HDFS access
Info: Excluding /usr/local/hadoop-2.5.2/share/hadoop/common/lib/slf4j-api-1.7.5.jar from classpath
Info: Excluding /usr/local/hadoop-2.5.2/share/hadoop/common/lib/slf4j-log4j12-1.7.5.jar from classpath
+ exec /usr/local/jdk1.7.0_51/bin/java -Xmx20m -Dflume.root.logger=INFO,LOGFILE -cp '/root/apache-flume-1.5.0-bin/conf:...:/usr/local/hadoop-2.5.2/contrib/capacity-scheduler/*.jar' -Djava.library.path=:/usr/local/hadoop-2.5.2/lib/native org.apache.flume.node.Application --name m2m

(The -cp value, heavily truncated here, enumerates the Hadoop config directory and every jar under /usr/local/hadoop-2.5.2/share/hadoop: common, hdfs, yarn and mapreduce.)

Look at the first line of that output:

Info: Including Hadoop libraries found via (/usr/local/hadoop-2.5.2/bin/hadoop) for HDFS access

The flume-ng script uses the hadoop command to locate all of Hadoop's jars and adds them to its own classpath automatically, which is why the error never appeared there.

So which jars does it actually depend on? I nearly went crazy hunting them down one at a time, trying each in turn, but eventually worked out the answer.

The class in the error message lives in hadoop-common in Hadoop 2; in Hadoop 1 it should be in hadoop-core.

First, copy the hadoop-common jar into Flume's lib directory.
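
For reference, a minimal sketch of that step. The Flume home /root/apache-flume-1.5.0-bin comes from the classpath in the startup output; the unzip check is just one way to confirm the jar really ships the missing class (grepping on the truncated prefix from the error message):

unzip -l /usr/local/hadoop-2.5.2/share/hadoop/common/hadoop-common-2.5.2.jar | grep 'org/apache/hadoop/io/Seq'
cp /usr/local/hadoop-2.5.2/share/hadoop/common/hadoop-common-2.5.2.jar /root/apache-flume-1.5.0-bin/lib/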

Then start the agent again. There is still an error:

29 Feb 2016 05:57:10,372 ERROR [conf-file-poller-0] (org.apache.flume.node.PollingPropertiesFileConfigurationProvider$FileWatcherRunnable.run:145) - Failed to start agent because dependencies were not found in classpath. Error follows.
java.lang.NoClassDefFoundError: org/apache/commons/configuration/Configuration
    at org.apache.hadoop.metrics2.lib.DefaultMetricsSystem.<init>(DefaultMetricsSystem.java:38)
    at org.apache.hadoop.metrics2.lib.DefaultMetricsSystem.<clinit>(DefaultMetricsSystem.java:36)
    at org.apache.hadoop.security.UserGroupInformation$UgiMetrics.create(UserGroupInformation.java:108)
    at org.apache.hadoop.security.UserGroupInformation.<clinit>(UserGroupInformation.java:210)
    at org.apache.flume.sink.hdfs.HDFSEventSink.authenticate(HDFSEventSink.java:553)
    at org.apache.flume.sink.hdfs.HDFSEventSink.configure(HDFSEventSink.java:272)
    at org.apache.flume.conf.Configurables.configure(Configurables.java:41)
    at org.apache.flume.node.AbstractConfigurationProvider.loadSinks(AbstractConfigurationProvider.java:418)
    at org.apache.flume.node.AbstractConfigurationProvider.getConfiguration(AbstractConfigurationProvider.java:103)
    at org.apache.flume.node.PollingPropertiesFileConfigurationProvider$FileWatcherRunnable.run(PollingPropertiesFileConfigurationProvider.java:140)

This class is in the commons-configuration jar. Again, look for it under the Hadoop install directory and copy it over.
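
If you would rather not guess at jar names, a small shell loop can do the searching. This is just a sketch; it assumes unzip is available and the Hadoop 2.5.2 path used above:

for jar in $(find /usr/local/hadoop-2.5.2/share/hadoop -name '*.jar'); do
  unzip -l "$jar" 2>/dev/null | grep -q 'org/apache/commons/configuration/Configuration.class' && echo "$jar"
done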

Start again.

Another error appears:

29 Feb 2016 07:28:48,615 ERROR [conf-file-poller-0] (org.apache.flume.node.PollingPropertiesFileConfigurationProvider$FileWatcherRunnable.run:145) - Failed to start agent because dependencies were not found in classpath. Error follows.
java.lang.NoClassDefFoundError: org/apache/hadoop/util/PlatformName
    at org.apache.hadoop.security.UserGroupInformation.getOSLoginModuleName(UserGroupInformation.java:339)
    at org.apache.hadoop.security.UserGroupInformation.<clinit>(UserGroupInformation.java:384)
    at org.apache.flume.sink.hdfs.HDFSEventSink.authenticate(HDFSEventSink.java:553)
    at org.apache.flume.sink.hdfs.HDFSEventSink.configure(HDFSEventSink.java:272)
    at org.apache.flume.conf.Configurables.configure(Configurables.java:41)
    at org.apache.flume.node.AbstractConfigurationProvider.loadSinks(AbstractConfigurationProvider.java:418)
    at org.apache.flume.node.AbstractConfigurationProvider.getConfiguration(AbstractConfigurationProvider.java:103)
    at org.apache.flume.node.PollingPropertiesFileConfigurationProvider$FileWatcherRunnable.run(PollingPropertiesFileConfigurationProvider.java:140)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.ClassNotFoundException: org.apache.hadoop.util.PlatformName
    at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
    at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:331)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
    ... 15 more

This class actually comes from the hadoop-auth jar (hadoop-common, which we already copied, does not carry it; note that hadoop-auth appears in the summary list at the end). Keep looking, find it, and drop it into Flume's lib directory as well.
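
To double-check where the class lives before copying, the same unzip trick works. In the 2.5.2 layout hadoop-auth sits under common/lib (a sketch, same path assumptions as above):

unzip -l /usr/local/hadoop-2.5.2/share/hadoop/common/lib/hadoop-auth-2.5.2.jar | grep 'org/apache/hadoop/util/PlatformName'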

Start it up again.

Good, startup no longer complains. Let's have a test client write some logs and see what happens.
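
For reference, the test events below arrived over Avro from 192.168.89.131; a client call along these lines would produce them (a sketch: the host and port come from the logs, /tmp/test.log is a made-up input file):

./bin/flume-ng avro-client --host 192.168.89.93 --port 44446 --filename /tmp/test.log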

29 Feb 2016 07:31:03,316 INFO [lifecycleSupervisor-1-3] (org.apache.flume.source.AvroSource.start:220) - Starting Avro source r1: { bindAddress: 0.0.0.0, port: 44446 }...
29 Feb 2016 07:31:03,532 INFO [lifecycleSupervisor-1-3] (org.apache.flume.instrumentation.MonitoredCounterGroup.register:119) - Monitored counter group for type: SOURCE, name: r1: Successfully registered new MBean.
29 Feb 2016 07:31:03,532 INFO [lifecycleSupervisor-1-3] (org.apache.flume.instrumentation.MonitoredCounterGroup.start:95) - Component type: SOURCE, name: r1 started
29 Feb 2016 07:31:03,533 INFO [lifecycleSupervisor-1-3] (org.apache.flume.source.AvroSource.start:245) - Avro source r1 started.
29 Feb 2016 07:34:11,558 INFO [New I/O server boss #1 ([id: 0x2a0f7a1a, /0:0:0:0:0:0:0:0:44446])] (org.apache.avro.ipc.NettyServer$NettyServerAvroHandler.handleUpstream:171) - [id: 0x5175bb19, /192.168.89.131:33596 => /192.168.89.93:44446] OPEN
29 Feb 2016 07:34:11,569 INFO [New I/O worker #1] (org.apache.avro.ipc.NettyServer$NettyServerAvroHandler.handleUpstream:171) - [id: 0x5175bb19, /192.168.89.131:33596 => /192.168.89.93:44446] BOUND: /192.168.89.93:44446
29 Feb 2016 07:34:11,569 INFO [New I/O worker #1] (org.apache.avro.ipc.NettyServer$NettyServerAvroHandler.handleUpstream:171) - [id: 0x5175bb19, /192.168.89.131:33596 => /192.168.89.93:44446] CONNECTED: /192.168.89.131:33596
29 Feb 2016 07:34:13,331 INFO [SinkRunner-PollingRunner-DefaultSinkProcessor] (org.apache.flume.sink.hdfs.HDFSSequenceFile.configure:63) - writeFormat = Writable, UseRawLocalFileSystem = false
29 Feb 2016 07:34:13,348 INFO [SinkRunner-PollingRunner-DefaultSinkProcessor] (org.apache.flume.sink.hdfs.BucketWriter.open:261) - Creating hdfs://192.168.92:9000/user/flume/syslogtcp/Syslog.1456749253331.tmp
29 Feb 2016 07:34:13,517 WARN [SinkRunner-PollingRunner-DefaultSinkProcessor] (org.apache.flume.sink.hdfs.HDFSEventSink.process:463) - HDFS IO error
java.io.IOException: No FileSystem for scheme: hdfs
    at org.apache.hadoop.fs.FileSystem.getFileSystemClass(FileSystem.java:2579)
    at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2586)
    at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:89)
    at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2625)
    at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2607)
    at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:368)
    at org.apache.hadoop.fs.Path.getFileSystem(Path.java:296)
    at org.apache.flume.sink.hdfs.BucketWriter$1.call(BucketWriter.java:270)
    at org.apache.flume.sink.hdfs.BucketWriter$1.call(BucketWriter.java:262)
    at org.apache.flume.sink.hdfs.BucketWriter$9$1.run(BucketWriter.java:718)
    at org.apache.flume.sink.hdfs.BucketWriter.runPrivileged(BucketWriter.java:183)
    at org.apache.flume.sink.hdfs.BucketWriter.access$1700(BucketWriter.java:59)
    at org.apache.flume.sink.hdfs.BucketWriter$9.call(BucketWriter.java:715)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)

And another error. "No FileSystem for scheme: hdfs" means the class that implements the hdfs:// scheme (DistributedFileSystem) cannot be found; it lives in the hadoop-hdfs jar, so keep searching and copy that one over too.
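
As an aside, the hdfs:// scheme is registered through a ServiceLoader entry shipped inside that jar, so you can verify the copy fixed the root cause by listing the registered FileSystem implementations (a sketch, same layout assumptions as above):

unzip -p /usr/local/hadoop-2.5.2/share/hadoop/hdfs/hadoop-hdfs-2.5.2.jar META-INF/services/org.apache.hadoop.fs.FileSystem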

Try starting once more:

29 Feb 2016 07:36:31,237 INFO [SinkRunner-PollingRunner-DefaultSinkProcessor] (org.apache.flume.sink.hdfs.BucketWriter.open:261) - Creating hdfs://192.168.92:9000/user/flume/syslogtcp/Syslog.1456749348984.tmp
29 Feb 2016 07:36:31,238 WARN [SinkRunner-PollingRunner-DefaultSinkProcessor] (org.apache.flume.sink.hdfs.HDFSEventSink.process:463) - HDFS IO error
java.io.IOException: Incomplete HDFS URI, no host: hdfs://192.168.92:9000/user/flume/syslogtcp/Syslog.1456749348984.tmp
    at org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:136)
    at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2591)
    at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:89)
    at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2625)
    at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2607)
    at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:368)
    at org.apache.hadoop.fs.Path.getFileSystem(Path.java:296)
    at org.apache.flume.sink.hdfs.BucketWriter$1.call(BucketWriter.java:270)
    at org.apache.flume.sink.hdfs.BucketWriter$1.call(BucketWriter.java:262)
    at org.apache.flume.sink.hdfs.BucketWriter$9$1.run(BucketWriter.java:718)
    at org.apache.flume.sink.hdfs.BucketWriter.runPrivileged(BucketWriter.java:183)
    at org.apache.flume.sink.hdfs.BucketWriter.access$1700(BucketWriter.java:59)
    at org.apache.flume.sink.hdfs.BucketWriter$9.call(BucketWriter.java:715)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)

Oops, the IP in the config is wrong: 192.168.92 is missing an octet, so the URI parser cannot extract a host. Fix it and restart:
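
The offending value is the sink's hdfs.path in ./conf/hdfs.conf. A sketch of the corrected line (the agent name m2m comes from the startup command, the sink name k1 and the path from the logs; the rest of the file is omitted):

m2m.sinks.k1.hdfs.path = hdfs://192.168.89.92:9000/user/flume/syslogtcp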

29 Feb 2016 07:36:59,159 INFO [lifecycleSupervisor-1-0] (org.apache.flume.instrumentation.MonitoredCounterGroup.start:95) - Component type: CHANNEL, name: c1 started
29 Feb 2016 07:36:59,160 INFO [conf-file-poller-0] (org.apache.flume.node.Application.startAllComponents:173) - Starting Sink k1
29 Feb 2016 07:36:59,160 INFO [conf-file-poller-0] (org.apache.flume.node.Application.startAllComponents:184) - Starting Source r1
29 Feb 2016 07:36:59,160 INFO [lifecycleSupervisor-1-4] (org.apache.flume.source.AvroSource.start:220) - Starting Avro source r1: { bindAddress: 0.0.0.0, port: 44446 }...
29 Feb 2016 07:36:59,162 INFO [lifecycleSupervisor-1-1] (org.apache.flume.instrumentation.MonitoredCounterGroup.register:119) - Monitored counter group for type: SINK, name: k1: Successfully registered new MBean.
29 Feb 2016 07:36:59,162 INFO [lifecycleSupervisor-1-1] (org.apache.flume.instrumentation.MonitoredCounterGroup.start:95) - Component type: SINK, name: k1 started
29 Feb 2016 07:36:59,362 INFO [lifecycleSupervisor-1-4] (org.apache.flume.instrumentation.MonitoredCounterGroup.register:119) - Monitored counter group for type: SOURCE, name: r1: Successfully registered new MBean.
29 Feb 2016 07:36:59,362 INFO [lifecycleSupervisor-1-4] (org.apache.flume.instrumentation.MonitoredCounterGroup.start:95) - Component type: SOURCE, name: r1 started
29 Feb 2016 07:36:59,362 INFO [lifecycleSupervisor-1-4] (org.apache.flume.source.AvroSource.start:245) - Avro source r1 started.
29 Feb 2016 07:37:04,834 INFO [New I/O server boss #1 ([id: 0xad116ca6, /0:0:0:0:0:0:0:0:44446])] (org.apache.avro.ipc.NettyServer$NettyServerAvroHandler.handleUpstream:171) - [id: 0xb1c37abf, /192.168.89.131:33598 => /192.168.89.93:44446] OPEN
29 Feb 2016 07:37:04,835 INFO [New I/O worker #1] (org.apache.avro.ipc.NettyServer$NettyServerAvroHandler.handleUpstream:171) - [id: 0xb1c37abf, /192.168.89.131:33598 => /192.168.89.93:44446] BOUND: /192.168.89.93:44446
29 Feb 2016 07:37:04,835 INFO [New I/O worker #1] (org.apache.avro.ipc.NettyServer$NettyServerAvroHandler.handleUpstream:171) - [id: 0xb1c37abf, /192.168.89.131:33598 => /192.168.89.93:44446] CONNECTED: /192.168.89.131:33598
29 Feb 2016 07:37:05,063 INFO [SinkRunner-PollingRunner-DefaultSinkProcessor] (org.apache.flume.sink.hdfs.HDFSSequenceFile.configure:63) - writeFormat = Writable, UseRawLocalFileSystem = false
29 Feb 2016 07:37:05,099 INFO [SinkRunner-PollingRunner-DefaultSinkProcessor] (org.apache.flume.sink.hdfs.BucketWriter.open:261) - Creating hdfs://192.168.89.92:9000/user/flume/syslogtcp/Syslog.1456749425063.tmp
29 Feb 2016 07:37:25,200 INFO [SinkRunner-PollingRunner-DefaultSinkProcessor] (org.apache.flume.sink.hdfs.BucketWriter.close:409) - Closing hdfs://192.168.89.92:9000/user/flume/syslogtcp/Syslog.1456749425063.tmp
29 Feb 2016 07:37:25,200 INFO [hdfs-k1-call-runner-2] (org.apache.flume.sink.hdfs.BucketWriter$3.call:339) - Close tries incremented
29 Feb 2016 07:37:25,224 INFO [hdfs-k1-call-runner-3] (org.apache.flume.sink.hdfs.BucketWriter$8.call:669) - Renaming hdfs://192.168.89.92:9000/user/flume/syslogtcp/Syslog.1456749425063.tmp to hdfs://192.168.89.92:9000/user/flume/syslogtcp/Syslog.1456749425063
29 Feb 2016 07:37:25,248 INFO [SinkRunner-PollingRunner-DefaultSinkProcessor] (org.apache.flume.sink.hdfs.BucketWriter.open:261) - Creating hdfs://192.168.89.92:9000/user/flume/syslogtcp/Syslog.1456749425064.tmp
29 Feb 2016 07:37:45,160 INFO [SinkRunner-PollingRunner-DefaultSinkProcessor] (org.apache.flume.sink.hdfs.BucketWriter.close:409) - Closing hdfs://192.168.89.92:9000/user/flume/syslogtcp/Syslog.1456749425064.tmp
29 Feb 2016 07:37:45,160 INFO [hdfs-k1-call-runner-6] (org.apache.flume.sink.hdfs.BucketWriter$3.call:339) - Close tries incremented
29 Feb 2016 07:37:45,172 INFO [hdfs-k1-call-runner-7] (org.apache.flume.sink.hdfs.BucketWriter$8.call:669) - Renaming hdfs://192.168.89.92:9000/user/flume/syslogtcp/Syslog.1456749425064.tmp to hdfs://192.168.89.92:9000/user/flume/syslogtcp/Syslog.1456749425064
29 Feb 2016 07:37:45,204 INFO [SinkRunner-PollingRunner-DefaultSinkProcessor] (org.apache.flume.sink.hdfs.BucketWriter.open:261) - Creating hdfs://192.168.89.92:9000/user/flume/syslogtcp/Syslog.1456749425065.tmp
29 Feb 2016 07:38:05,192 INFO [SinkRunner-PollingRunner-DefaultSinkProcessor] (org.apache.flume.sink.hdfs.BucketWriter.close:409) - Closing hdfs://192.168.89.92:9000/user/flume/syslogtcp/Syslog.1456749425065.tmp
29 Feb 2016 07:38:05,192 INFO [hdfs-k1-call-runner-0] (org.apache.flume.sink.hdfs.BucketWriter$3.call:339) - Close tries incremented
29 Feb 2016 07:38:05,198 INFO [hdfs-k1-call-runner-1] (org.apache.flume.sink.hdfs.BucketWriter$8.call:669) - Renaming hdfs://192.168.89.92:9000/user/flume/syslogtcp/Syslog.1456749425065.tmp to hdfs://192.168.89.92:9000/user/flume/syslogtcp/Syslog.1456749425065
29 Feb 2016 07:38:05,209 INFO [SinkRunner-PollingRunner-DefaultSinkProcessor] (org.apache.flume.sink.hdfs.BucketWriter.open:261) - Creating hdfs://192.168.89.92:9000/user/flume/syslogtcp/Syslog.1456749425066.tmp

And this time there are no errors; everything works.

To sum up: after installing Flume, the jars that need to be copied over from the Hadoop installation are:

commons-configuration-1.6.jar
hadoop-auth-2.5.2.jar
hadoop-common-2.5.2.jar
hadoop-hdfs-2.5.2.jar
hadoop-mapreduce-client-core-2.5.2.jar
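
For convenience, a sketch that copies all five in one go; the source paths follow the 2.5.2 layout visible in the startup classpath, and the Flume lib directory is assumed from the paths above:

H=/usr/local/hadoop-2.5.2/share/hadoop
cp $H/common/lib/commons-configuration-1.6.jar \
   $H/common/lib/hadoop-auth-2.5.2.jar \
   $H/common/hadoop-common-2.5.2.jar \
   $H/hdfs/hadoop-hdfs-2.5.2.jar \
   $H/mapreduce/hadoop-mapreduce-client-core-2.5.2.jar \
   /root/apache-flume-1.5.0-bin/lib/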
