Setting Up Spark with Docker Using the singularities/spark:2.2 Image
Published: 2019-06-07


The singularities/spark:2.2 image ships with:

Hadoop version: 2.8.2

Spark version: 2.2.1

Scala version: 2.11.8

Java version: 1.8.0_151

Pull the image:

[root@localhost docker-spark-2.1.0]# docker pull singularities/spark

Check the downloaded image:

[root@localhost docker-spark-2.1.0]# docker image ls
REPOSITORY                      TAG                 IMAGE ID            CREATED             SIZE
docker.io/singularities/spark   latest              84222b254621        6 months ago        1.39 GB
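Note that pulling without a tag grabs latest, which can drift from the versions listed above. To stay on the 2.2 release this walkthrough describes, the tag can be pinned explicitly (assuming the 2.2 tag is published on Docker Hub, as the image name in the title suggests):

# Pin the image to the 2.2 release instead of relying on "latest"
docker pull singularities/spark:2.2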

Create a docker-compose.yml file:

[root@localhost home]# mkdir singularitiesCR
[root@localhost home]# cd singularitiesCR
[root@localhost singularitiesCR]# touch docker-compose.yml

Contents:

version: "2"
services:
  master:
    image: singularities/spark
    command: start-spark master
    hostname: master
    ports:
      - "6066:6066"
      - "7070:7070"
      - "8080:8080"
      - "50070:50070"
  worker:
    image: singularities/spark
    command: start-spark worker master
    environment:
      SPARK_WORKER_CORES: 1
      SPARK_WORKER_MEMORY: 2g
    links:
      - master
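A quick note on the mappings: 8080 is the Spark master web UI, 50070 is the HDFS NameNode UI, and 6066 is Spark's REST job-submission port. The master's actual RPC port, 7077, is not published to the host; the worker reaches it over Docker's internal network through the links entry. SPARK_WORKER_CORES and SPARK_WORKER_MEMORY cap each worker at 1 core and 2 GB of memory. Before bringing anything up, Compose can validate the file and print the resolved configuration:

# Validate docker-compose.yml and show the fully resolved configuration
docker-compose config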

Running docker-compose up brings up a Spark cluster running in standalone mode with a single worker node:

[root@localhost singularitiesCR]# docker-compose up -d
Creating singularitiescr_master_1 ... done
Creating singularitiescr_worker_1 ... done
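Since the worker service holds no state of its own, nothing stops you from running several workers against the same master. A sketch, assuming a docker-compose version recent enough (1.13+) to support the --scale flag:

# Bring the cluster up with three workers instead of one
docker-compose up -d --scale worker=3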

Check the containers:

[root@localhost singularitiesCR]# docker-compose ps
          Name                      Command            State                                             Ports
--------------------------------------------------------------------------------------------------------------------------------------------------------
singularitiescr_master_1   start-spark master          Up      10020/tcp, 13562/tcp, 14000/tcp, 19888/tcp, 50010/tcp, 50020/tcp,
                                                               0.0.0.0:50070->50070/tcp, 50075/tcp, 50090/tcp, 50470/tcp, 50475/tcp,
                                                               0.0.0.0:6066->6066/tcp, 0.0.0.0:7070->7070/tcp, 7077/tcp, 8020/tcp,
                                                               0.0.0.0:8080->8080/tcp, 8081/tcp, 9000/tcp
singularitiescr_worker_1   start-spark worker master   Up      10020/tcp, 13562/tcp, 14000/tcp, 19888/tcp, 50010/tcp, 50020/tcp, 50070/tcp, 50075/tcp,
                                                               50090/tcp, 50470/tcp, 50475/tcp, 6066/tcp, 7077/tcp, 8020/tcp, 8080/tcp, 8081/tcp,
                                                               9000/tcp

Check the result:
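One way to verify from the host that the master is up and the worker has registered, using the ports published in the compose file above (the grep pattern assumes the wording of the Spark 2.2 master status page):

# The master web UI answers on the published port 8080;
# its status page reports the number of registered workers
curl -s http://localhost:8080 | grep 'Alive Workers'

# The HDFS NameNode UI answers on 50070
curl -sI http://localhost:50070 | head -n 1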

Stop the containers:

[root@localhost singularitiesCR]# docker-compose stop
Stopping singularitiescr_worker_1 ... done
Stopping singularitiescr_master_1 ... done
[root@localhost singularitiesCR]# docker-compose ps
          Name                      Command             State     Ports
-----------------------------------------------------------------------
singularitiescr_master_1   start-spark master          Exit 137
singularitiescr_worker_1   start-spark worker master   Exit 137
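Exit 137 is 128 + 9, i.e. the processes ended with SIGKILL: docker-compose stop sends SIGTERM first and escalates to SIGKILL after a grace period (10 seconds by default). If the Spark daemons need more time to shut down cleanly, the timeout can be raised:

# Allow 30 seconds for a graceful shutdown before SIGKILL (default is 10)
docker-compose stop -t 30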

Remove the containers:

[root@localhost singularitiesCR]# docker-compose rm
Going to remove singularitiescr_worker_1, singularitiescr_master_1
Are you sure? [yN] y
Removing singularitiescr_worker_1 ... done
Removing singularitiescr_master_1 ... done
[root@localhost singularitiesCR]# docker-compose ps
Name   Command   State   Ports
------------------------------
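The stop-then-rm sequence can also be collapsed into a single step: docker-compose down stops and removes the containers along with the default network Compose created for them:

# Stop and remove containers plus the compose-created default network
docker-compose down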

Enter the master container and check the versions:

[root@localhost singularitiesCR]# docker exec -it 497 /bin/bash
root@master:/# hadoop version
Hadoop 2.8.2
Subversion https://git-wip-us.apache.org/repos/asf/hadoop.git -r 66c47f2a01ad9637879e95f80c41f798373828fb
Compiled by jdu on 2017-10-19T20:39Z
Compiled with protoc 2.5.0
From source with checksum dce55e5afe30c210816b39b631a53b1d
This command was run using /usr/local/hadoop-2.8.2/share/hadoop/common/hadoop-common-2.8.2.jar
root@master:/# which hadoop
/usr/local/hadoop-2.8.2/bin/hadoop
root@master:/# spark-shell
Setting default log level to "WARN".
To adjust logging level use sc.setLogLevel(newLevel). For SparkR, use setLogLevel(newLevel).
18/08/14 09:20:44 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Spark context Web UI available at http://172.18.0.2:4040
Spark context available as 'sc' (master = local[*], app id = local-1534238447256).
Spark session available as 'spark'.
Welcome to
      ____              __
     / __/__  ___ _____/ /__
    _\ \/ _ \/ _ `/ __/  '_/
   /___/ .__/\_,_/_/ /_/\_\   version 2.2.1
      /_/

Using Scala version 2.11.8 (OpenJDK 64-Bit Server VM, Java 1.8.0_151)
Type in expressions to have them evaluated.
Type :help for more information.
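Note the banner line master = local[*]: launched with no arguments, spark-shell runs a local Spark rather than using the standalone cluster. To actually drive the cluster, point the shell at the master's RPC endpoint and run a trivial job; the hostname master comes from the compose file, and the sum itself is exact (1 + 2 + ... + 100 = 5050):

root@master:/# spark-shell --master spark://master:7077
scala> sc.parallelize(1 to 100).sum()   // runs on the cluster; returns 5050.0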

 


Source: https://www.cnblogs.com/hongdada/p/9475406.html
