spark-shell error : No FileSystem for scheme: wasb

roy · Jul 7, 2016 · Viewed 10k times

We have an HDInsight cluster running in Azure, but it doesn't allow spinning up an edge/gateway node at the time of cluster creation. So I was creating this edge/gateway node by installing:

# Add the HDP, HDP-UTILS and Azure package repositories
echo 'deb http://private-repo-1.hortonworks.com/HDP/ubuntu14/2.x/updates/2.4.2.0 HDP main' >> /etc/apt/sources.list.d/HDP.list
echo 'deb http://private-repo-1.hortonworks.com/HDP-UTILS-1.1.0.20/repos/ubuntu14 HDP-UTILS main'  >> /etc/apt/sources.list.d/HDP.list
echo 'deb [arch=amd64] https://apt-mo.trafficmanager.net/repos/azurecore/ trusty main' >> /etc/apt/sources.list.d/azure-public-trusty.list

# Import the repository signing keys
gpg --keyserver pgp.mit.edu --recv-keys B9733A7A07513CAD
gpg -a --export 07513CAD | apt-key add -
gpg --keyserver pgp.mit.edu --recv-keys B02C46DF417A0893
gpg -a --export 417A0893 | apt-key add -

# Install Java, then the Hadoop/Spark stack and client tools
apt-get -y install openjdk-7-jdk
export JAVA_HOME=/usr/lib/jvm/java-7-openjdk-amd64
apt-get -y install hadoop hadoop-hdfs hadoop-yarn hadoop-mapreduce hadoop-client openssl libhdfs0 liblzo2-2 liblzo2-dev hadoop-lzo phoenix hive hive-hcatalog tez mysql-connector-java* oozie oozie-client sqoop flume flume-agent spark spark-python spark-worker spark-yarn-shuffle

Then I copied /usr/lib/python2.7/dist-packages/hdinsight_common/, /usr/share/java/, /usr/lib/hdinsight-datalake/, /etc/spark/conf/, and /etc/hadoop/conf/.

But when I run spark-shell I get the following error:

java.io.IOException: No FileSystem for scheme: wasb

Here is the full stack https://gist.github.com/anonymous/ebb6c9d71865c9c8e125aadbbdd6a5bc
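
From what I understand, Hadoop resolves a URI scheme by looking up fs.<scheme>.impl in its configuration, or a ServiceLoader entry from a JAR on the classpath. Here is a quick way to reproduce just the failing lookup from a Scala REPL (a sketch of my own, not part of the setup above):

    import org.apache.hadoop.conf.Configuration
    import org.apache.hadoop.fs.FileSystem

    // This is the lookup that fails: with no fs.wasb.impl setting and no
    // matching ServiceLoader entry on the classpath, it throws
    // "No FileSystem for scheme: wasb".
    FileSystem.getFileSystemClass("wasb", new Configuration())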

I am not sure which package/jar is missing here.

Does anyone have a clue what I am doing wrong?

Thanks

Answer

NicolásKittsteiner · Jan 9, 2017

Another way of setting up Azure Storage (wasb and wasbs file schemes) in spark-shell is:

  1. Copy the azure-storage and hadoop-azure JARs into the ./jars directory of the Spark installation.
  2. Run spark-shell with the parameter --jars [a comma-separated list of paths to those JARs]. Example (a command-line-only variant that also folds in step 3 is sketched after this list):

    
    $ bin/spark-shell --master "local[*]" --jars jars/hadoop-azure-2.7.0.jar,jars/azure-storage-2.0.0.jar
    
  3. Add the following settings to the Spark context's Hadoop configuration:

    
    sc.hadoopConfiguration.set("fs.azure", "org.apache.hadoop.fs.azure.NativeAzureFileSystem")
    sc.hadoopConfiguration.set("fs.azure.account.key.my_account.blob.core.windows.net", "my_key")
    
  4. Run a simple query:

    
    sc.textFile("wasb://my_container@my_account_host/myfile.txt").count()
    
  5. Enjoy :)
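
As a command-line-only variant (my own sketch, not part of the original steps): Spark copies any spark.hadoop.* properties into the Hadoop configuration of the context, so steps 2 and 3 can be folded into the launch command; the account name and key are placeholders:

    $ bin/spark-shell --master "local[*]" \
        --jars jars/hadoop-azure-2.7.0.jar,jars/azure-storage-2.0.0.jar \
        --conf spark.hadoop.fs.azure=org.apache.hadoop.fs.azure.NativeAzureFileSystem \
        --conf spark.hadoop.fs.azure.account.key.my_account.blob.core.windows.net=my_key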

With these settings you can easily set up a Spark application, passing the same parameters to the hadoopConfiguration of the current Spark context.
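
For instance, a minimal sketch of such an application (assuming the same two JARs are on the classpath when submitting; account, container, and key are placeholders):

    import org.apache.spark.{SparkConf, SparkContext}

    object WasbCount {
      def main(args: Array[String]): Unit = {
        // Master and deploy options are expected to come from spark-submit
        val sc = new SparkContext(new SparkConf().setAppName("WasbCount"))

        // The same two settings as step 3 above, applied programmatically
        sc.hadoopConfiguration.set("fs.azure",
          "org.apache.hadoop.fs.azure.NativeAzureFileSystem")
        sc.hadoopConfiguration.set(
          "fs.azure.account.key.my_account.blob.core.windows.net", "my_key")

        // The same query as step 4
        val count = sc.textFile(
          "wasb://my_container@my_account.blob.core.windows.net/myfile.txt").count()
        println(s"line count: $count")

        sc.stop()
      }
    }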