I am trying to read files from a Google Cloud Storage bucket in Spark, and I have done the configuration required by https://cloud.google.com/dataproc/docs/connectors/install-storage-connector. Running

hadoop fs -ls gs://directory-name/ 

works as expected. But when I read the same directory from a Python/Spark script as

rdd = sc.textFile("gs://directory-name/")

I get the following stack trace and error:

File "/home/hadoopuser/spark/python/lib/pyspark.zip/pyspark/rdd.py", line 1312, in take 
    File "/home/hadoopuser/spark/python/lib/pyspark.zip/pyspark/rdd.py", line 384, in getNumPartitions 
    File "/home/hadoopuser/spark/python/lib/py4j-0.10.4-src.zip/py4j/java_gateway.py", line 1133, in __call__ 
    File "/home/hadoopuser/spark/python/lib/py4j-0.10.4-src.zip/py4j/protocol.py", line 319, in get_return_value 
py4j.protocol.Py4JJavaError: An error occurred while calling o21.partitions. 
: java.io.IOException: Error getting access token from metadata server at: http://metadata/computeMetadata/v1/instance/service-accounts/default/token 
    at com.google.cloud.hadoop.util.CredentialFactory.getCredentialFromMetadataServiceAccount(CredentialFactory.java:207) 
    at com.google.cloud.hadoop.util.CredentialConfiguration.getCredential(CredentialConfiguration.java:70) 
    at com.google.cloud.hadoop.fs.gcs.GoogleHadoopFileSystemBase.configure(GoogleHadoopFileSystemBase.java:1816) 
    at com.google.cloud.hadoop.fs.gcs.GoogleHadoopFileSystemBase.initialize(GoogleHadoopFileSystemBase.java:1003) 
    at com.google.cloud.hadoop.fs.gcs.GoogleHadoopFileSystemBase.initialize(GoogleHadoopFileSystemBase.java:966) 
    at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2669) 
    at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:94) 
    at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2703) 
    at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2685) 
    at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:373) 
    at org.apache.hadoop.fs.Path.getFileSystem(Path.java:295) 
    at org.apache.hadoop.mapred.FileInputFormat.singleThreadedListStatus(FileInputFormat.java:258) 
    at org.apache.hadoop.mapred.FileInputFormat.listStatus(FileInputFormat.java:229) 
    at org.apache.hadoop.mapred.FileInputFormat.getSplits(FileInputFormat.java:315) 
    at org.apache.spark.rdd.HadoopRDD.getPartitions(HadoopRDD.scala:202) 
    at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:252) 
    at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:250) 
    at scala.Option.getOrElse(Option.scala:121) 
    at org.apache.spark.rdd.RDD.partitions(RDD.scala:250) 
    at org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:35) 
    at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:252) 
    at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:250) 
    at scala.Option.getOrElse(Option.scala:121) 
    at org.apache.spark.rdd.RDD.partitions(RDD.scala:250) 
    at org.apache.spark.api.java.JavaRDDLike$class.partitions(JavaRDDLike.scala:61) 
    at org.apache.spark.api.java.AbstractJavaRDDLike.partitions(JavaRDDLike.scala:45) 
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) 
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) 
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) 
    at java.lang.reflect.Method.invoke(Method.java:498) 
    at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244) 
    at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357) 
    at py4j.Gateway.invoke(Gateway.java:280) 
    at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132) 
    at py4j.commands.CallCommand.execute(CallCommand.java:79) 
    at py4j.GatewayConnection.run(GatewayConnection.java:214) 
    at java.lang.Thread.run(Thread.java:748) 
Caused by: java.net.UnknownHostException: metadata 
    at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:184) 
    at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392) 
    at java.net.Socket.connect(Socket.java:589) 
    at sun.net.NetworkClient.doConnect(NetworkClient.java:175) 
    at sun.net.www.http.HttpClient.openServer(HttpClient.java:463) 
    at sun.net.www.http.HttpClient.openServer(HttpClient.java:558) 
    at sun.net.www.http.HttpClient.<init>(HttpClient.java:242) 
    at sun.net.www.http.HttpClient.New(HttpClient.java:339) 
    at sun.net.www.http.HttpClient.New(HttpClient.java:357) 
    at sun.net.www.protocol.http.HttpURLConnection.getNewHttpClient(HttpURLConnection.java:1202) 
    at sun.net.www.protocol.http.HttpURLConnection.plainConnect0(HttpURLConnection.java:1138) 
    at sun.net.www.protocol.http.HttpURLConnection.plainConnect(HttpURLConnection.java:1032) 
    at sun.net.www.protocol.http.HttpURLConnection.connect(HttpURLConnection.java:966) 
    at com.google.api.client.http.javanet.NetHttpRequest.execute(NetHttpRequest.java:93) 
    at com.google.api.client.http.HttpRequest.execute(HttpRequest.java:972) 
    at com.google.cloud.hadoop.util.CredentialFactory$ComputeCredentialWithRetry.executeRefreshToken(CredentialFactory.java:158) 
    at com.google.api.client.auth.oauth2.Credential.refreshToken(Credential.java:489) 
    at com.google.cloud.hadoop.util.CredentialFactory.getCredentialFromMetadataServiceAccount(CredentialFactory.java:205) 
    ... 36 more 

I have gone through every related link I could find, but was unable to resolve this. Any suggestions are appreciated.

Answer

It looks like Spark is not picking up your core-site.xml, so the GCS connector falls back to requesting credentials from the GCE metadata server. The host name metadata only resolves on Google Compute Engine VMs, which is why you get java.net.UnknownHostException: metadata when running elsewhere. You can either

  • copy core-site.xml to $SPARK_HOME/conf, or
  • set HADOOP_CONF_DIR to the directory containing it (e.g. in $SPARK_HOME/conf/spark-env.sh), as shown in the sketch below.
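A minimal sketch of both options, assuming the core-site.xml from the install guide lives under /usr/local/hadoop/etc/hadoop (substitute the configuration directory of your own installation):

# Option 1: copy the Hadoop config into Spark's own conf directory
cp /usr/local/hadoop/etc/hadoop/core-site.xml $SPARK_HOME/conf/

# Option 2: instead, point Spark at the Hadoop conf directory
# by adding this line to $SPARK_HOME/conf/spark-env.sh
export HADOOP_CONF_DIR=/usr/local/hadoop/etc/hadoop

Alternatively, the same keys can be set on the Hadoop configuration at runtime, which avoids touching the Spark installation. This is only a sketch: the project id and keyfile path are placeholders, and it assumes the gcs-connector jar is already on the driver and executor classpath.

from pyspark import SparkContext

sc = SparkContext()
# grab the underlying Hadoop configuration through the JVM gateway
hconf = sc._jsc.hadoopConfiguration()
# the same properties the install guide puts into core-site.xml
hconf.set("fs.gs.impl", "com.google.cloud.hadoop.fs.gcs.GoogleHadoopFileSystem")
hconf.set("fs.gs.project.id", "your-project-id")  # placeholder
hconf.set("google.cloud.auth.service.account.enable", "true")
hconf.set("google.cloud.auth.service.account.json.keyfile", "/path/to/key.json")  # placeholder

rdd = sc.textFile("gs://directory-name/")
print(rdd.take(1))

With either fix in place, the connector reads the service-account keyfile settings from core-site.xml instead of contacting the metadata server.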