Hadoop: There are 0 datanode(s) running and no node(s) are excluded in this operation

I deployed a Hadoop cluster on VMware. All of the nodes run CentOS 7.

Output of the jps command on the master:

[root@hadoopmaster anna]# jps
6225 NameNode
6995 ResourceManager
6580 SecondaryNameNode
7254 Jps

Output of the jps command on the slave:

[root@hadoopslave1 anna]# jps
5066 DataNode
5818 Jps
5503 NodeManager

However, I have no idea why the live nodes count at http://localhost:50070/dfshealth.html#tab-overview
shows 0. And I cannot run hdfs dfs -put in/file/f1; it fails with this error message:

[root@hadoopmaster hadoop]# hdfs dfs -put in/file/f1 /user
16/01/06 02:53:14 WARN hdfs.DFSClient: DataStreamer Exception
org.apache.hadoop.ipc.RemoteException(java.io.IOException): File /user._COPYING_ could only be replicated to 0 nodes instead of minReplication (=1).  There are 0 datanode(s) running and no node(s) are excluded in this operation.
    at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget4NewBlock(BlockManager.java:1550)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getNewBlockTargets(FSNamesystem.java:3110)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:3034)
    at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:723)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:492)
    at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:969)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2049)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2045)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2043)

    at org.apache.hadoop.ipc.Client.call(Client.java:1476)
    at org.apache.hadoop.ipc.Client.call(Client.java:1407)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:229)
    at com.sun.proxy.$Proxy9.addBlock(Unknown Source)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.addBlock(ClientNamenodeProtocolTranslatorPB.java:418)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:497)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:187)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
    at com.sun.proxy.$Proxy10.addBlock(Unknown Source)
    at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.locateFollowingBlock(DFSOutputStream.java:1430)
    at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:1226)
    at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:449)
put: File /user._COPYING_ could only be replicated to 0 nodes instead of minReplication (=1).  There are 0 datanode(s) running and no node(s) are excluded in this operation.
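
To confirm the discrepancy from the master side, I can run the following (a diagnostic sketch; the log path assumes the default log layout of a root-run install, hadoop-<user>-datanode-<hostname>.log, so adjust it to your setup):

# Ask the NameNode how many DataNodes have actually registered;
# "Live datanodes (0)" here matches what the web UI reports.
hdfs dfsadmin -report

# On the slave, the DataNode log usually says why registration failed
# (connection refused, incompatible clusterIDs, ...). Assumed path:
tail -n 50 $HADOOP_HOME/logs/hadoop-root-datanode-hadoopslave1.log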

I tried fixes from other posts, such as removing the temporary files:

rm -R /tmp/*
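
One caution raised around this fix: with the default configuration, hadoop.tmp.dir lives under /tmp, so wiping /tmp can leave the NameNode and DataNode with mismatched clusterIDs, which produces exactly this "0 datanode(s)" symptom. A sketch of the check, assuming the default /tmp/hadoop-root storage layout (adjust to your dfs.namenode.name.dir and dfs.datanode.data.dir):

# On the master: the NameNode's clusterID.
grep clusterID /tmp/hadoop-root/dfs/name/current/VERSION

# On the slave: the DataNode's clusterID; the two must match.
grep clusterID /tmp/hadoop-root/dfs/data/current/VERSION

# If they differ: stop HDFS, remove the DataNode's data directory on
# the slave (destructive: deletes that node's block replicas), restart.
stop-dfs.sh
rm -rf /tmp/hadoop-root/dfs/data   # on the slave only
start-dfs.sh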

I also verified passwordless SSH between the nodes.

On the master:

[root@hadoopmaster hadoop]# ssh hadoopmaster
Last login: Wed Jan  6 02:56:27 2016 from hadoopslave1
[root@hadoopmaster ~]# exit
logout
Connection to hadoopmaster closed.
[root@hadoopmaster hadoop]# ssh hadoopslave1
Last login: Wed Jan  6 02:43:21 2016
[root@hadoopslave1 ~]# exit
logout
Connection to hadoopslave1 closed.
[root@hadoopmaster hadoop]#

On the slave:

[root@hadoopslave1 .ssh]# ssh hadoopmaster
Last login: Wed Jan  6 03:04:45 2016 from hadoopmaster
[root@hadoopmaster ~]# exit
logout
Connection to hadoopmaster closed.
[root@hadoopslave1 .ssh]# ssh hadoopslave1
Last login: Wed Jan  6 03:04:40 2016 from hadoopmaster
[root@hadoopslave1 ~]# exit
logout
Connection to hadoopslave1 closed.
[root@hadoopslave1 .ssh]# 
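
Since SSH works in both directions, the usual remaining suspects on CentOS 7 are name resolution and the firewall: firewalld is enabled by default and can silently block the DataNode's connection to the NameNode RPC port. A sketch of the checks, assuming fs.defaultFS points at hadoopmaster:9000 (substitute your configured port):

# On the slave: hadoopmaster must resolve to the real IP,
# not 127.0.0.1 (a common /etc/hosts pitfall on VM clusters).
getent hosts hadoopmaster

# Can the slave reach the NameNode RPC port at all?
nc -zv hadoopmaster 9000

# On the master: is firewalld filtering the port?
systemctl status firewalld
firewall-cmd --list-ports

# Quick test only; afterwards re-enable it or open the needed ports.
systemctl stop firewalld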

Original author: Anna Chen | 2016-01-06