1. Install Scala
Download: http://www.scala-lang.org/download/
Environment variables are configured the same way as for Java; for convenience, everything is kept in a single file under /etc/profile.d:
hadoop.sh
#set Java Environment
export JAVA_HOME=/usr/java/jdk1.6.0_45
export JRE_HOME=$JAVA_HOME/jre
export CLASSPATH=.:$CLASSPATH:$JAVA_HOME/lib:$JAVA_HOME/jre/lib
export PATH=$JAVA_HOME/bin:$JRE_HOME/bin:$PATH
#set Scala Environment
export SCALA_HOME=/usr/scala/scala-2.10.4
export PATH=$SCALA_HOME/bin:$PATH
#set hadoop path
export HADOOP_HOME=/usr/local/hadoop
export PATH=$HADOOP_HOME/bin:$HADOOP_HOME/sbin:$HADOOP_HOME/libexec:$PATH
export HADOOP_HOME_WARN_SUPPRESS=1
# set hbase path
export HBASE_HOME=/usr/local/hbase
export PATH=$HBASE_HOME/bin:$PATH
# set hive path
export HIVE_HOME=/usr/local/hive
export PATH=$HIVE_HOME/bin:$HIVE_HOME/conf:$PATH
# set mahout path
export MAHOUT_HOME=/usr/local/mahout
export MAHOUT_CONF_DIR=$MAHOUT_HOME/conf
export PATH=$MAHOUT_CONF_DIR:$MAHOUT_HOME/bin:$PATH
#set pig path
export PIG_HOME=/usr/local/pig
export PATH=$PIG_HOME/bin:$PIG_HOME/conf:$PATH
export PIG_CLASSPATH=$HADOOP_HOME/conf
#set ant path
export ANT_HOME=/usr/local/apache-ant-1.8.4
export PATH=$ANT_HOME/bin:$PATH
# set maven path
export M2_HOME=/usr/local/apache-maven-3.1.1
export PATH=$M2_HOME/bin:$PATH
#set zookeeper path
export ZOOKEEPER_HOME=/usr/local/zookeeper
export PATH=$ZOOKEEPER_HOME/bin:$ZOOKEEPER_HOME/conf:$PATH
#set dog path
export DOG_HOME=/usr/local/dog
export PATH=$DOG_HOME/bin:$PATH
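After saving hadoop.sh, open a new shell (or source the file) and sanity-check the paths; a minimal check, assuming the versions installed above:
source /etc/profile
java -version    # should report 1.6.0_45
scala -version   # should report 2.10.4
hadoop version   # confirms $HADOOP_HOME/bin is on PATH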
2. Install Hadoop2
http://yeelor.iteye.com/blog/2002623
3. Install Spark
Download a pre-built package from http://spark.apache.org/downloads.html, and check that it was built against the Scala version installed above.
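For example, fetching and unpacking a pre-built release from the Apache archive (a sketch only; the file name and version here are assumptions, pick whatever matches your Hadoop setup and Scala 2.10):
wget http://archive.apache.org/dist/spark/spark-1.0.0/spark-1.0.0-bin-hadoop2.tgz
tar -zxf spark-1.0.0-bin-hadoop2.tgz
mv spark-1.0.0-bin-hadoop2 /usr/local/spark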
Edit the conf/spark-env.sh file and add:
export SCALA_HOME=/usr/scala/scala-2.10.4
export SPARK_WORKER_MEMORY=24g
export SPARK_MASTER_IP=218.193.154.216
export MASTER=spark://218.193.154.216:7077
Edit the conf/slaves file and add:
slave1
slave2
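The names master, slave1, and slave2 must resolve to the same addresses on every node. A sketch of the /etc/hosts entries this setup implies (the master IP presumably matches SPARK_MASTER_IP above; the slave IPs are placeholders to replace with real ones):
218.193.154.216 master
218.193.154.217 slave1   # placeholder, use the real IP
218.193.154.218 slave2   # placeholder, use the real IP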
Copy the spark directory to every machine.
1.sh
#!/bin/bash
# Push Spark, Scala, and the environment file to every node, then run 2.sh there.
for host in master slave1 slave2; do
    echo "Installing on ${host}..."
    echo "Copying the configuration files"
    scp -r /usr/local/spark root@${host}:/usr/local
    scp -r /usr/scala root@${host}:/usr/
    scp /etc/profile.d/hadoop.sh root@${host}:/etc/profile.d
    scp 2.sh root@${host}:/tmp/2.sh
    ssh root@${host} sh /tmp/2.sh
    echo "Finished installing ${host}"
done
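The scp/ssh loop in 1.sh assumes passwordless root SSH from this machine to every host in the list; if that is not set up yet, a one-time sketch:
ssh-keygen -t rsa                      # accept the defaults
for host in master slave1 slave2; do
    ssh-copy-id root@${host}           # appends the key to each host's authorized_keys
done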
2.sh
# Runs on each node after the files are copied over.
chown -R hadoop:hadoop /usr/local/spark
echo "Applying the environment variables"
source /etc/profile
exit
4. Start the cluster
From the Spark root directory:
Start:
./sbin/start-all.sh
Stop:
./sbin/stop-all.sh
Check the processes with the jps command: the master node should show a Master process, and each slave node a Worker process.
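To check every node from one place, the same SSH setup used by 1.sh allows a quick loop (a convenience sketch, not required):
for host in master slave1 slave2; do
    echo "--- ${host} ---"
    ssh root@${host} jps
done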
5. Test
Check the master's web UI: http://hmaster:8080/
1)
./bin/run-example org.apache.spark.examples.SparkPi
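Depending on the Spark release, run-example either picks the master URL up from the MASTER variable set in spark-env.sh, or accepts it as the first argument; the explicit form used by older releases looks like this (a sketch, adjust to your version):
./bin/run-example org.apache.spark.examples.SparkPi spark://218.193.154.216:7077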
2)
./bin/spark-shell
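Once the shell starts, the built-in SparkContext sc can run a small job as a smoke test; a non-interactive sketch that should print res0: Long = 1000:
echo 'sc.parallelize(1 to 1000).count()' | ./bin/spark-shell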
6. References
Book: 《Spark大数据处理》 (Spark Big Data Processing)