The following steps were carried out on CentOS 7.
Contents
Docker installation
MySQL installation
Redis installation
MinIO installation
RabbitMQ installation
Elasticsearch installation
Hadoop/Spark installation
ZooKeeper & Kafka installation
1. Docker installation (CentOS 7)
Before installing Docker CE for the first time on a new host, you need to set up the Docker repository. After that, you can install and update Docker from the repository.
```bash
# 1. Install the required packages
sudo yum install -y yum-utils \
  device-mapper-persistent-data \
  lvm2

# 2. Set up the stable repository
sudo yum-config-manager \
  --add-repo \
  https://download.docker.com/linux/centos/docker-ce.repo

# 3. Install the latest version of Docker CE
sudo yum install docker-ce -y

# 4. Start Docker
sudo systemctl start docker

# 5. Verify the installation by running the hello-world image
sudo docker run hello-world

# Extra: install Docker Compose
curl -L https://github.com/docker/compose/releases/download/1.23.2/docker-compose-`uname -s`-`uname -m` -o /usr/bin/docker-compose
chmod +x /usr/bin/docker-compose
```
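Not in the original notes, but a common follow-up: make the Docker daemon start at boot and confirm the installed version.

```bash
sudo systemctl enable docker
docker version
```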
2. MySQL 5.7 installation

```bash
# 1. Pull the MySQL image
docker pull mysql:5.7

# 2. Create directories (used to persist the configuration and data)
mkdir -p /data/docker/mysql/conf.d
mkdir -p /data/docker/mysql/data

# 3. Create the MySQL container
docker run -itd -p 3306:3306 --restart=always \
  -v /data/docker/mysql/conf.d:/etc/mysql/conf.d \
  -v /data/docker/mysql/data:/var/lib/mysql \
  -v /data/docker/mysql/my.cnf:/etc/my.cnf \
  -e MYSQL_ROOT_PASSWORD='pass=root' \
  --name mysql mysql:5.7
```

-v maps directories, mapping the container's configuration and data folders to the host; -p maps ports in host:container format; -e adds an environment variable, and MYSQL_ROOT_PASSWORD is the root user's login password.

```bash
# 4. Enter the MySQL container
docker exec -it mysql /bin/bash

# 5. Log in to MySQL
mysql -u root -p
```

Enter the password; reaching the mysql prompt proves the installation succeeded.

Handling an edge case: enabling remote login for MySQL.

```sql
USE mysql;                    -- switch to the mysql DB
SELECT User, Host FROM user;  -- list existing users and the hosts they may connect from
GRANT ALL PRIVILEGES ON *.* TO 'root'@'%' IDENTIFIED BY 'Xyq_pass=kid1999' WITH GRANT OPTION;
FLUSH PRIVILEGES;
```
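The run command above mounts /data/docker/mysql/my.cnf into the container, but the notes never show that file; it must exist on the host before the container starts, or Docker will create a directory in its place. Purely as a sketch, with settings that are assumptions rather than anything from these notes, a minimal my.cnf could look like:

```ini
# hypothetical minimal my.cnf -- adjust to your needs
[mysqld]
character-set-server = utf8mb4
collation-server = utf8mb4_unicode_ci
max_connections = 200

[client]
default-character-set = utf8mb4
```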
3. Redis installation

```bash
# 1. Pull the image
docker pull redis:5.0

# 2. Start Redis with a config file
docker run -itd --name redis --restart=always -p 6379:6379 \
  -v /data/docker/redis:/data \
  -v /data/docker/redis/redis.conf:/etc/redis/redis.conf \
  redis:5.0 redis-server /etc/redis/redis.conf
# Note: see the Redis chapter for the contents of redis.conf

# 3. Connect to the container and inspect it (the container is named "redis")
docker exec -it redis redis-cli --raw
```
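A one-line liveness check, not part of the original notes but safe to run at this point:

```bash
# ping the server through the container; a healthy Redis answers PONG
docker exec -it redis redis-cli ping
```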
4. MinIO installation
Following https://docs.minio.io/cn/deploy-minio-on-docker-compose.html
```bash
# 1. Pull the image
docker pull minio/minio

# 2. Run the container
docker run -di -p 9000:9000 \
  --name minio1 \
  -v /mnt/data:/data \
  -e "MINIO_ACCESS_KEY=root" \
  -e "MINIO_SECRET_KEY=pass=root" \
  --restart=always \
  minio/minio server /data
```
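One way to verify the deployment is with the MinIO client mc; this check is not in the original notes, and the alias name "local" is arbitrary:

```bash
# register the server under an alias, create a bucket, and list it
# (older mc releases spell the first command "mc config host add" instead of "mc alias set")
mc alias set local http://127.0.0.1:9000 root pass=root
mc mb local/test-bucket
mc ls local
```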
5. RabbitMQ installation

```bash
# 1. Pull the image
docker pull rabbitmq:3-management

# 2. Run the container (5672 is the AMQP port, 15672 the management UI)
docker run -dit --name rabbitmq --restart=always \
  -e RABBITMQ_DEFAULT_USER=root \
  -e RABBITMQ_DEFAULT_PASS='pass=root' \
  -p 15672:15672 -p 5672:5672 \
  rabbitmq:3-management
```
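A quick health check that is not in the original notes: hit the management plugin's HTTP API with the credentials set above.

```bash
# returns a JSON overview document if the broker and management UI are up
curl -u root:'pass=root' http://localhost:15672/api/overview
```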
6. Elasticsearch installation

```bash
# 1. Pull the image
docker pull elasticsearch:7.8.0

# 2. Run the container
docker run -p 9200:9200 -p 9300:9300 --name es7.8 \
  --restart=always \
  -e "discovery.type=single-node" \
  -e ES_JAVA_OPTS="-Xms256m -Xmx256m" \
  -v /data/docker/es/plugins:/usr/share/elasticsearch/plugins \
  -v /data/docker/es/data:/usr/share/elasticsearch/data \
  -v /data/docker/es/logs:/usr/share/elasticsearch/logs \
  -v /data/docker/es/elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml \
  -d elasticsearch:7.8.0

# 3. Fix permissions on the mapped folders
chmod -R 775 /data/docker/es/data
chmod -R 775 /data/docker/es/logs

# 4. Install Kibana
docker pull kibana:7.8.0
docker run -d \
  --name=kibana \
  --restart=always \
  -p 5601:5601 \
  -v /data/docker/kibana/kibana.yml:/usr/share/kibana/config/kibana.yml \
  kibana:7.8.0
```
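Both containers mount config files whose contents these notes do not reproduce. As sketches only, with every value an assumption rather than the original files, minimal versions could be:

```yaml
# /data/docker/es/elasticsearch.yml -- hypothetical minimal single-node config
cluster.name: es-single
network.host: 0.0.0.0
```

```yaml
# /data/docker/kibana/kibana.yml -- hypothetical; <es-host-ip> stands in for the host running Elasticsearch
server.host: "0.0.0.0"
elasticsearch.hosts: ["http://<es-host-ip>:9200"]
```

Once Elasticsearch is up, curl http://localhost:9200 should return the cluster info JSON.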
7. Hadoop/Spark installation
Remember to change the -v mapping path to somewhere convenient for dropping in the Spark programs you write and launching them!

```bash
# 1. Pull the master and worker images
docker pull bde2020/spark-master:2.4.5-hadoop2.7
docker pull bde2020/spark-worker:2.4.5-hadoop2.7

# 2. Run the containers
# -v $(pwd):/app mounts the current directory (on Windows cmd, use %cd% instead)
docker run --name spark-master -h spark-master -e ENABLE_INIT_DAEMON=false -d \
  -p 8080:8080 -p 7077:7077 -v $(pwd):/app bde2020/spark-master:2.4.5-hadoop2.7
docker run --name spark-worker-1 --link spark-master:spark-master \
  -e ENABLE_INIT_DAEMON=false -d bde2020/spark-worker:2.4.5-hadoop2.7

# 3. Open localhost:8080 to check that the master is up

# 4. Enter the master and run a Spark program
docker exec -it spark-master /bin/bash
/spark/bin/spark-submit --class work.ProductRecommendationByALS --master local /app/spark-work-1.2.jar
```
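The submit above uses --master local, which runs the job inside the master container's JVM and never touches the worker. A hedged variant that submits to the standalone cluster instead, reusing the same class and jar paths from the notes:

```bash
# run against the cluster master rather than in local mode
/spark/bin/spark-submit \
  --class work.ProductRecommendationByALS \
  --master spark://spark-master:7077 \
  /app/spark-work-1.2.jar
```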
8. ZooKeeper & Kafka installation

1. Installing with docker
```bash
docker pull wurstmeister/zookeeper
docker pull wurstmeister/kafka

docker run -d --name zookeeper -p 2181:2181 -t wurstmeister/zookeeper

docker run -d --name kafka -p 9092:9092 \
  -e KAFKA_BROKER_ID=0 \
  -e KAFKA_ZOOKEEPER_CONNECT=159.75.6.26:2181 \
  -e KAFKA_ADVERTISED_LISTENERS=PLAINTEXT://159.75.6.26:9092 \
  -e KAFKA_LISTENERS=PLAINTEXT://0.0.0.0:9092 \
  -t wurstmeister/kafka
```

Four parameters are set here:

KAFKA_BROKER_ID=0
KAFKA_ZOOKEEPER_CONNECT=159.75.6.26:2181
KAFKA_ADVERTISED_LISTENERS=PLAINTEXT://159.75.6.26:9092
KAFKA_LISTENERS=PLAINTEXT://0.0.0.0:9092

Change 159.75.6.26 in the middle two parameters to your host machine's IP address; otherwise Kafka may not be reachable from other machines.

```bash
# optional: kafka-manager web UI for the cluster
docker run -itd --rm --link zookeeper:zookeeper \
  --link kafka:kafka \
  -p 9001:9000 \
  -e ZK_HOSTS=zookeeper:2181 \
  dockerkafka/kafka-manager
```
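With both containers up, a smoke test not found in the original notes is to produce and consume a message using the console tools shipped in the wurstmeister image (they live under /opt/kafka/bin); the broker address must match your KAFKA_ADVERTISED_LISTENERS value:

```bash
# produce: type a few lines, then Ctrl-C (topic auto-creation is on by default)
docker exec -it kafka /opt/kafka/bin/kafka-console-producer.sh \
  --broker-list 159.75.6.26:9092 --topic test

# consume: should print the lines entered above
docker exec -it kafka /opt/kafka/bin/kafka-console-consumer.sh \
  --bootstrap-server 159.75.6.26:9092 --topic test --from-beginning
```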
2. Installing with docker-compose
```yaml
version: "3.3"
services:
  zookeeper:
    image: zookeeper:3.5.5
    restart: always
    container_name: zookeeper
    ports:
      - "2181:2181"
    expose:
      - "2181"
    environment:
      - ZOO_MY_ID=1
  kafka:
    image: wurstmeister/kafka
    restart: always
    container_name: kafka
    environment:
      - KAFKA_BROKER_ID=1
      - KAFKA_LISTENERS=PLAINTEXT://kafka:9090
      - KAFKA_ZOOKEEPER_CONNECT=zookeeper:2181
      - KAFKA_MESSAGE_MAX_BYTES=2000000
    ports:
      - "9090:9090"
    depends_on:
      - zookeeper
  kafka-manager:
    image: sheepkiller/kafka-manager
    environment:
      ZK_HOSTS: zookeeper:2181
    ports:
      - "9000:9000"
```

Bring the stack up with:

```bash
docker-compose up -d
```
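Note that KAFKA_LISTENERS above advertises the broker as kafka:9090, a name that resolves only on the compose network (which is enough for kafka-manager). If clients on the host or on other machines need to connect, a hedged variant of the kafka environment block, with <host-ip> standing in for your machine's address, would be:

```yaml
    environment:
      - KAFKA_BROKER_ID=1
      - KAFKA_LISTENERS=PLAINTEXT://0.0.0.0:9090
      - KAFKA_ADVERTISED_LISTENERS=PLAINTEXT://<host-ip>:9090
      - KAFKA_ZOOKEEPER_CONNECT=zookeeper:2181
      - KAFKA_MESSAGE_MAX_BYTES=2000000
```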