1. Copy the configuration files /letv/data/apache-hive-0.13.1-bin/conf/hive-site.xml and
/letv/data/apache-hive-0.13.1-bin/conf/hive-log4j.properties.template
into the $SPARK_HOME/conf directory:
cp /letv/data/apache-hive-0.13.1-bin/conf/hive-site.xml /letv/data/spark-1.5.0-bin-hadoop2.6/conf
Then change the JDBC connection address in hive-site.xml to point at the server where MySQL runs.
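For reference, the relevant hive-site.xml property looks like the sketch below; the host and database names are placeholders to adapt to your own environment:

<property>
  <name>javax.jdo.option.ConnectionURL</name>
  <value>jdbc:mysql://your-mysql-host:3306/hive?createDatabaseIfNotExist=true</value>
</property>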
2. Configure the Spark classpath in /letv/data/spark-1.5.0-bin-hadoop2.6/conf/spark-env.sh by adding the MySQL JDBC driver (before this, place the MySQL connector jar in the lib directory, as sketched after the export line):
export SPARK_CLASSPATH=$SPARK_LOCAL_DIRS/lib/mysql-connector-java-5.1.18.jar
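A minimal way to put the jar in place, assuming it sits in the current directory:
cp mysql-connector-java-5.1.18.jar /letv/data/spark-1.5.0-bin-hadoop2.6/lib/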
3. Start the Hive metastore:
nohup hive --service metastore > metastore.log 2>&1 &
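To confirm the metastore came up, tail its log and check that it is listening on its port (9083 by default, unless overridden in hive-site.xml):
tail metastore.log
netstat -an | grep 9083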
4. Fix the permissions on /tmp/hive-root (Hive's scratch directory on HDFS):
hadoop fs -chmod -R +w /tmp/hive-root
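You can verify the change with a plain listing (the exact mode bits will vary):
hadoop fs -ls /tmp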
5. Start the spark-sql CLI:
bin/spark-sql --master spark://10.185.28.92:7077
(or, to run against YARN instead of the standalone master: bin/spark-sql --master yarn-client)
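Once the CLI is up you can issue HiveQL directly; the table name below is only an example:
spark-sql> show databases;
spark-sql> select count(*) from my_table;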
6. Turn off INFO logging:
copy log4j.properties.template to log4j.properties in $SPARK_HOME/conf and set log4j.rootCategory=WARN, console
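Concretely, something like:
cd /letv/data/spark-1.5.0-bin-hadoop2.6/conf
cp log4j.properties.template log4j.properties
then edit log4j.properties so that the root logger line reads log4j.rootCategory=WARN, console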
Reference: if you hit
java.lang.IllegalArgumentException: java.net.UnknownHostException: ns1
you need to add the following to spark-env.sh:
export HADOOP_CONF_DIR=$HADOOP_HOME/etc/hadoop
This is required in order to use HDFS.
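A likely cause, assuming an HDFS HA setup: ns1 is a logical nameservice defined in hdfs-site.xml rather than a real hostname, so Spark can only resolve it once the Hadoop configuration directory is on its classpath. The defining entry looks roughly like:

<property>
  <name>dfs.nameservices</name>
  <value>ns1</value>
</property>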