
Spark HBase connection timeout/hang

I am trying to test reading from and writing to HBase on my local machine.

I am using the Cloudera QuickStart Docker image to host HBase, Hadoop, ZooKeeper, etc.

I have the following code:

    import org.apache.hadoop.hbase.{HBaseConfiguration, TableName}
    import org.apache.hadoop.hbase.client.{ConnectionFactory, Get, Put, Table}
    import org.apache.hadoop.hbase.mapreduce.TableInputFormat
    import org.apache.hadoop.hbase.util.Bytes
    import org.apache.spark.{SparkConf, SparkContext}
    import org.apache.spark.rdd.RDD

    object HBaseTest {

      def main(args: Array[String]): Unit = {
        val conf = new SparkConf().setAppName("Database Benchmark")
        val sparkContext = new SparkContext(conf)

        val tableName = "testTable"
        val HBaseConf = HBaseConfiguration.create()
        // Point the client at the local (Dockerized) HBase/ZooKeeper
        HBaseConf.set("hbase.master", "localhost")
        HBaseConf.set("hbase.zookeeper.quorum", "localhost")
        HBaseConf.set("hbase.zookeeper.property.clientPort", "2181")
        HBaseConf.set(TableInputFormat.INPUT_TABLE, tableName)

        val connection = ConnectionFactory.createConnection(HBaseConf)
        val table = connection.getTable(TableName.valueOf(tableName))

        val rdd = sparkContext.parallelize(1 to 100)
          .map(i => (i.toString, i + 1))

        try {
          read(table)
          write(rdd)
        } catch {
          case e: Exception => println("uh oh") // uh oh.
        } finally {
          table.close()
          connection.close()
        }
      }

      // Builds a single hard-coded Put (note: it is never submitted via table.put)
      def write(toWrite: RDD[(String, Int)]): Put = {
        val putter = new Put(Bytes.toBytes("Row2"))
        putter.addColumn(Bytes.toBytes("test"),
          Bytes.toBytes("column1"), Bytes.toBytes(toWrite.toString()))
      }

      // Fetches a single row by key and prints it
      def read(table: Table): Unit = {
        val row = table.get(new Get(Bytes.toBytes("newRow")))
        println(row.toString)
      }
    }
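(Side note: as posted, write() only builds a Put and never hands it to the table. Once the connection issue is solved, a minimal sketch of actually persisting the test rows could look like the following; writeAll is a hypothetical helper that assumes testTable has a column family named "test" and simply collects the tiny 100-element RDD on the driver.)

    // Hypothetical helper (not in the original code): submit one Put per (rowKey, value) pair.
    // Assumes a column family "test" exists on the target table; the small test RDD is
    // collected on the driver, so nothing here runs on the executors.
    def writeAll(table: Table, toWrite: RDD[(String, Int)]): Unit = {
      import scala.collection.JavaConverters._
      val puts = toWrite.collect().toList.map { case (rowKey, value) =>
        new Put(Bytes.toBytes(rowKey))
          .addColumn(Bytes.toBytes("test"), Bytes.toBytes("column1"), Bytes.toBytes(value))
      }
      table.put(puts.asJava) // single batched call
    }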

Right now the code doesn't do much; I am just trying to get the read and the write working. But when I try to connect to the container, the call hangs indefinitely and I see messages like the following:

Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties 
16/10/25 14:56:00 INFO SparkContext: Running Spark version 2.0.1 
16/10/25 14:56:01 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable 
16/10/25 14:56:01 INFO SecurityManager: Changing view acls to: {user} 
16/10/25 14:56:01 INFO SecurityManager: Changing modify acls to: {user} 
16/10/25 14:56:01 INFO SecurityManager: Changing view acls groups to: 
16/10/25 14:56:01 INFO SecurityManager: Changing modify acls groups to: 
16/10/25 14:56:01 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set({user}); groups with view permissions: Set(); users with modify permissions: Set({user}); groups with modify permissions: Set() 
16/10/25 14:56:02 INFO Utils: Successfully started service 'sparkDriver' on port 57682. 
16/10/25 14:56:02 INFO SparkEnv: Registering MapOutputTracker 
16/10/25 14:56:02 INFO SparkEnv: Registering BlockManagerMaster 
16/10/25 14:56:02 INFO DiskBlockManager: Created local directory at /private/var/folders/b5/6bhlwry949n3mpwppt4m5_1jcmsf2g/T/blockmgr-d4aaf6d1-ca0d-4c9b-9e08-9b7716e89791 
16/10/25 14:56:02 INFO MemoryStore: MemoryStore started with capacity 2004.6 MB 
16/10/25 14:56:02 INFO SparkEnv: Registering OutputCommitCoordinator 
16/10/25 14:56:02 INFO Utils: Successfully started service 'SparkUI' on port 4040. 
16/10/25 14:56:02 INFO SparkUI: Bound SparkUI to 127.0.0.1, and started at http://127.0.0.1:4040 
16/10/25 14:56:02 INFO Executor: Starting executor ID driver on host localhost 
16/10/25 14:56:02 INFO Utils: Successfully started service 'org.apache.spark.network.netty.NettyBlockTransferService' on port 57683. 
16/10/25 14:56:02 INFO NettyBlockTransferService: Server created on 127.0.0.1:57683 
16/10/25 14:56:02 INFO BlockManagerMaster: Registering BlockManager BlockManagerId(driver, 127.0.0.1, 57683) 
16/10/25 14:56:02 INFO BlockManagerMasterEndpoint: Registering block manager 127.0.0.1:57683 with 2004.6 MB RAM, BlockManagerId(driver, 127.0.0.1, 57683) 
16/10/25 14:56:02 INFO BlockManagerMaster: Registered BlockManager BlockManagerId(driver, 127.0.0.1, 57683) 
16/10/25 14:56:02 INFO RecoverableZooKeeper: Process identifier=hconnection-0x5bd73d1a connecting to ZooKeeper ensemble=localhost:2181 
16/10/25 14:56:02 INFO ZooKeeper: Client environment:zookeeper.version=3.4.6-1569965, built on 02/20/2014 09:09 GMT 
16/10/25 14:56:02 INFO ZooKeeper: Client environment:host.name=10.171.46.220 
16/10/25 14:56:02 INFO ZooKeeper: Client environment:java.version=1.8.0_101 
16/10/25 14:56:02 INFO ZooKeeper: Client environment:java.vendor=Oracle Corporation 
16/10/25 14:56:02 INFO ZooKeeper: Client environment:java.library.path=/Users/{user}/Library/Java/Extensions:/Library/Java/Extensions:/Network/Library/Java/Extensions:/System/Library/Java/Extensions:/usr/lib/java:. 
16/10/25 14:56:02 INFO ZooKeeper: Client environment:java.io.tmpdir=/var/folders/b5/6bhlwry949n3mpwppt4m5_1jcmsf2g/T/ 
16/10/25 14:56:02 INFO ZooKeeper: Client environment:java.compiler=<NA> 
16/10/25 14:56:02 INFO ZooKeeper: Client environment:os.name=Mac OS X 
16/10/25 14:56:02 INFO ZooKeeper: Client environment:os.arch=x86_64 
16/10/25 14:56:02 INFO ZooKeeper: Client environment:os.version=10.11.6 
16/10/25 14:56:02 INFO ZooKeeper: Client environment:user.name={user} 
16/10/25 14:56:02 INFO ZooKeeper: Client environment:user.home=/Users/{user} 
16/10/25 14:56:02 INFO ZooKeeper: Client environment:user.dir=/Users/{user}/code/scala/HBaseTest 
16/10/25 14:56:02 INFO ZooKeeper: Initiating client connection, connectString=localhost:2181 sessionTimeout=90000 watcher=hconnection-0x5bd73d1a0x0, quorum=localhost:2181, baseZNode=/hbase 
16/10/25 14:56:02 INFO ClientCnxn: Opening socket connection to server localhost/127.0.0.1:2181. Will not attempt to authenticate using SASL (unknown error) 
16/10/25 14:56:02 INFO ClientCnxn: Socket connection established to localhost/127.0.0.1:2181, initiating session 
16/10/25 14:56:02 INFO ClientCnxn: Session establishment complete on server localhost/127.0.0.1:2181, sessionid = 0x157fce, negotiated timeout = 40000 
16/10/25 14:56:41 INFO RpcRetryingCaller: Call exception, tries=10, retries=35, started=38307 ms ago, cancelled=false, msg=row 'testTable,newRow,99999999999999' on table 'hbase:meta' at region=hbase:meta,,1.1588230740, hostname=quickstart.cloudera,60020,1477401812367, seqNum=0 
16/10/25 14:56:51 INFO RpcRetryingCaller: Call exception, tries=11, retries=35, started=48337 ms ago, cancelled=false, msg=row 'testTable,newRow,99999999999999' on table 'hbase:meta' at region=hbase:meta,,1.1588230740, hostname=quickstart.cloudera,60020,1477401812367, seqNum=0 
... 
16/10/25 15:00:56 INFO RpcRetryingCaller: Call exception, tries=11, retries=35, started=48468 ms ago, cancelled=false, msg=row 'testTable,newRow,99999999999999' on table 'hbase:meta' at region=hbase:meta,,1.1588230740, hostname=quickstart.cloudera,60020,1477401812367, seqNum=0 
16/10/25 15:01:08 INFO SparkContext: Invoking stop() from shutdown hook 
16/10/25 15:01:08 INFO SparkUI: Stopped Spark web UI at http://127.0.0.1:4040 
16/10/25 15:01:08 INFO MapOutputTrackerMasterEndpoint: MapOutputTrackerMasterEndpoint stopped! 
16/10/25 15:01:08 INFO MemoryStore: MemoryStore cleared 
16/10/25 15:01:08 INFO BlockManager: BlockManager stopped 
16/10/25 15:01:08 INFO BlockManagerMaster: BlockManagerMaster stopped 
16/10/25 15:01:08 INFO OutputCommitCoordinator$OutputCommitCoordinatorEndpoint: OutputCommitCoordinator stopped! 
16/10/25 15:01:08 INFO SparkContext: Successfully stopped SparkContext 
16/10/25 15:01:08 INFO ShutdownHookManager: Shutdown hook called 
16/10/25 15:01:08 INFO ShutdownHookManager: Deleting directory /private/var/folders/b5/6bhlwry949n3mpwppt4m5_1jcmsf2g/T/spark-cc1a87f4-ae09-489e-8957-c2c8a3788e9b 


Process finished with exit code 130 (interrupted by signal 2: SIGINT) 

What could the problem possibly be? In my hbase-site.xml I have:

<configuration> 
    <property> 
    <name>hbase.rest.port</name> 
    <value>8070</value> 
    <description>The port for the HBase REST server.</description> 
    </property> 

    <property> 
    <name>hbase.cluster.distributed</name> 
    <value>true</value> 
    </property> 

    <property> 
    <name>hbase.rootdir</name> 
    <value>hdfs://quickstart.cloudera:8020/hbase</value> 
    </property> 

    <property> 
    <name>hbase.regionserver.ipc.address</name> 
    <value>0.0.0.0</value> 
    </property> 

    <property> 
    <name>hbase.master.ipc.address</name> 
    <value>0.0.0.0</value> 
    </property> 

    <property> 
    <name>hbase.thrift.info.bindAddress</name> 
    <value>0.0.0.0</value> 
    </property> 

</configuration> 

I don't think it is the Docker container's ports, since I start the container with the required ports forwarded:

docker run --hostname=quickstart.cloudera --privileged=true -t -i -p 8888:8888 -p 7180:7180 -p 8000:80 -p 50070:50070 -p 8020:8020 -p 7077 -p 60000:60000 -p 60020:60020 -p 2181:2181 cloudera/quickstart /usr/bin/docker-quickstart 

Any ideas?
+0

Can you post the complete logs from YARN, and the command-line parameters you are running the app with? –

+0

@NirmalRam can you post the APPLICATIONID? –

+0

It might point to the root cause. –

Answers

0

Found the problem.

My container's hostname was missing from my /etc/hosts file:

127.0.0.1 quickstart.cloudera 
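The reason this matters: the HBase client contacts ZooKeeper, looks up hbase:meta, and is handed back the RegionServer's hostname, quickstart.cloudera (visible in the RpcRetryingCaller messages above). That hostname must resolve on the machine running the Spark driver, hence the /etc/hosts entry. A quick sanity check, just as a sketch, from any JVM on the host:

    import java.net.InetAddress

    // Sketch: verify that the hostname HBase reports for its RegionServer
    // resolves on this machine (it should map to 127.0.0.1 after the /etc/hosts edit).
    object ResolveCheck extends App {
      val addr = InetAddress.getByName("quickstart.cloudera") // throws UnknownHostException if not resolvable
      println(s"quickstart.cloudera -> ${addr.getHostAddress}")
    }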
0

1) Change:

HBaseConf.set("hbase.master", "localhost:60000") 

2) Add:

    <property>
      <name>zookeeper.znode.parent</name>
      <value>/hbase-unsecure</value>
    </property>

3) Do you have the HBase client (https://mvnrepository.com/artifact/org.apache.hbase/hbase-client) in your build.sbt file?
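For reference, the build.sbt line might look something like this (the version is only an example; it should match the HBase version shipped with the QuickStart image):

    // Example only: pick the hbase-client version matching the server
    // (the CDH 5.x QuickStart image ships an HBase 1.2.x release).
    libraryDependencies += "org.apache.hbase" % "hbase-client" % "1.2.0"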

+0

Tried these; no change. Number 2 just makes the 'hbase:meta ...' part of the message become 'at null'. Number 3 was already in the build. –