I am very confused about the Hadoop configuration in core-site.xml and hdfs-site.xml. I suspect that the start-dfs.sh script is not actually using this configuration: I can format the namenode successfully as the hdfs user, but when I run start-dfs.sh it fails to start the HDFS daemons. Can anyone help me? Here is the error message when I run start-dfs.sh:
[hdfs@I26C ~]$ start-dfs.sh
Starting namenodes on [I26C]
I26C: mkdir: cannot create directory ‘/hdfs’: Permission denied
I26C: chown: cannot access ‘/hdfs/hdfs’: No such file or directory
I26C: starting namenode, logging to /hdfs/hdfs/hadoop-hdfs-namenode-I26C.out
I26C: /edw/hadoop-2.7.2/sbin/hadoop-daemon.sh: line 159: /hdfs/hdfs/hadoop-hdfs-namenode-I26C.out: No such file or directory
I26C: head: cannot open ‘/hdfs/hdfs/hadoop-hdfs-namenode-I26C.out’ for reading: No such file or directory
I26C: /edw/hadoop-2.7.2/sbin/hadoop-daemon.sh: line 177: /hdfs/hdfs/hadoop-hdfs-namenode-I26C.out: No such file or directory
I26C: /edw/hadoop-2.7.2/sbin/hadoop-daemon.sh: line 178: /hdfs/hdfs/hadoop-hdfs-namenode-I26C.out: No such file or directory
10.1.226.15: mkdir: cannot create directory ‘/hdfs’: Permission denied
10.1.226.15: chown: cannot access ‘/hdfs/hdfs’: No such file or directory
10.1.226.15: starting datanode, logging to /hdfs/hdfs/hadoop-hdfs-datanode-I26C.out
10.1.226.15: /edw/hadoop-2.7.2/sbin/hadoop-daemon.sh: line 159: /hdfs/hdfs/hadoop-hdfs-datanode-I26C.out: No such file or directory
10.1.226.16: mkdir: cannot create directory ‘/edw/hadoop-2.7.2/logs’: Permission denied
10.1.226.16: chown: cannot access ‘/edw/hadoop-2.7.2/logs’: No such file or directory
10.1.226.16: starting datanode, logging to /edw/hadoop-2.7.2/logs/hadoop-hdfs-datanode-I26D.out
10.1.226.16: /edw/hadoop-2.7.2/sbin/hadoop-daemon.sh: line 159: /edw/hadoop-2.7.2/logs/hadoop-hdfs-datanode-I26D.out: No such file or directory
10.1.226.15: head: cannot open ‘/hdfs/hdfs/hadoop-hdfs-datanode-I26C.out’ for reading: No such file or directory
10.1.226.15: /edw/hadoop-2.7.2/sbin/hadoop-daemon.sh: line 177: /hdfs/hdfs/hadoop-hdfs-datanode-I26C.out: No such file or directory
10.1.226.15: /edw/hadoop-2.7.2/sbin/hadoop-daemon.sh: line 178: /hdfs/hdfs/hadoop-hdfs-datanode-I26C.out: No such file or directory
10.1.226.16: head: cannot open ‘/edw/hadoop-2.7.2/logs/hadoop-hdfs-datanode-I26D.out’ for reading: No such file or directory
10.1.226.16: /edw/hadoop-2.7.2/sbin/hadoop-daemon.sh: line 177: /edw/hadoop-2.7.2/logs/hadoop-hdfs-datanode-I26D.out: No such file or directory
10.1.226.16: /edw/hadoop-2.7.2/sbin/hadoop-daemon.sh: line 178: /edw/hadoop-2.7.2/logs/hadoop-hdfs-datanode-I26D.out: No such file or directory
Starting secondary namenodes [0.0.0.0]
0.0.0.0: mkdir: cannot create directory ‘/hdfs’: Permission denied
0.0.0.0: chown: cannot access ‘/hdfs/hdfs’: No such file or directory
0.0.0.0: starting secondarynamenode, logging to /hdfs/hdfs/hadoop-hdfs-secondarynamenode-I26C.out
0.0.0.0: /edw/hadoop-2.7.2/sbin/hadoop-daemon.sh: line 159: /hdfs/hdfs/hadoop-hdfs-secondarynamenode-I26C.out: No such file or directory
0.0.0.0: head: cannot open ‘/hdfs/hdfs/hadoop-hdfs-secondarynamenode-I26C.out’ for reading: No such file or directory
0.0.0.0: /edw/hadoop-2.7.2/sbin/hadoop-daemon.sh: line 177: /hdfs/hdfs/hadoop-hdfs-secondarynamenode-I26C.out: No such file or directory
0.0.0.0: /edw/hadoop-2.7.2/sbin/hadoop-daemon.sh: line 178: /hdfs/hdfs/hadoop-hdfs-secondarynamenode-I26C.out: No such file or directory
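
From the output above, the master tries to write logs under /hdfs/hdfs while the slave falls back to /edw/hadoop-2.7.2/logs, and the hdfs user can create neither directory. A minimal sketch of pre-creating them with the right ownership (run as root; the hdfs user and hadoop group match the setup shown below):

# On the master (I26C), which also hosts the secondary namenode:
mkdir -p /hdfs/hdfs
chown -R hdfs:hadoop /hdfs/hdfs
chmod 775 /hdfs/hdfs

# On the slave (I26D):
mkdir -p /edw/hadoop-2.7.2/logs
chown -R hdfs:hadoop /edw/hadoop-2.7.2/logs
chmod 775 /edw/hadoop-2.7.2/logs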
Here is the information about my deployment.

Master:
hostname: I26C
IP: 10.1.226.15

Slave:
hostname: I26D
IP: 10.1.226.16

Hadoop version: 2.7.2
OS: CentOS 7
JAVA: 1.8

I have created 4 users:

groupadd hadoop
useradd -g hadoop hadoop
useradd -g hadoop hdfs
useradd -g hadoop mapred
useradd -g hadoop yarn

The HDFS namenode and datanode directory privileges:

drwxrwxr-x. 3 hadoop hadoop 4.0K Apr 26 15:40 hadoop-data

core-site.xml configuration:
<configuration>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/edw/hadoop-data/</value>
    <description>Temporary Directory.</description>
  </property>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://10.1.226.15:54310</value>
  </property>
</configuration>
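
To check whether the scripts actually resolve this file (my suspicion above), the values can be queried directly; a sketch, assuming HADOOP_CONF_DIR points at the directory holding these XML files:

hdfs getconf -confKey fs.defaultFS     # expect hdfs://10.1.226.15:54310
hdfs getconf -confKey hadoop.tmp.dir   # expect /edw/hadoop-data/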
hdfs-site.xml configuration:
<configuration>
  <property>
    <name>dfs.replication</name>
    <value>2</value>
    <description>Default block replication.
    The actual number of replications can be specified when the file is created.
    The default is used if replication is not specified in create time.
    </description>
  </property>
  <property>
    <name>dfs.namenode.name.dir</name>
    <value>file:///edw/hadoop-data/dfs/namenode</value>
    <description>Determines where on the local filesystem the DFS name node should store the name table(fsimage). If this is a comma-delimited list of directories then the name table is replicated in all of the directories, for redundancy.
    </description>
  </property>
  <property>
    <name>dfs.blocksize</name>
    <value>67108864</value>
  </property>
  <property>
    <name>dfs.namenode.handler.count</name>
    <value>100</value>
  </property>
  <property>
    <name>dfs.datanode.data.dir</name>
    <value>file:///edw/hadoop-data/dfs/datanode</value>
    <description>Determines where on the local filesystem an DFS data node should store its blocks. If this is a comma-delimited list of directories, then data will be stored in all named directories, typically on different devices. Directories that do not exist are ignored.
    </description>
  </property>
</configuration>
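
Since dfs.namenode.name.dir and dfs.datanode.data.dir both live under /edw/hadoop-data, the daemons also need write access there; a sketch of the expected preparation (run as root, assuming the same hdfs:hadoop ownership as above):

mkdir -p /edw/hadoop-data/dfs/namenode   # master only
mkdir -p /edw/hadoop-data/dfs/datanode   # every datanode
chown -R hdfs:hadoop /edw/hadoop-data
chmod -R 775 /edw/hadoop-data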
Here is my new solution: set $HADOOP_LOG_DIR in {HADOOP_HOME}/etc/hadoop/hadoop-env.sh. Also make sure those directories are created in advance and that the hdfs user has proper write permission on them. You can verify with echo $HADOOP_CONF_DIR. – neeraj
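
A minimal sketch of that suggestion, assuming the install path from the logs (/edw/hadoop-2.7.2) and a log directory the hdfs user can write to:

# Append to /edw/hadoop-2.7.2/etc/hadoop/hadoop-env.sh on every node
export HADOOP_LOG_DIR=/edw/hadoop-2.7.2/logs   # any directory writable by hdfs

# Then confirm which configuration directory the scripts pick up:
echo $HADOOP_CONF_DIR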