
Building a high-availability cluster requires weighing many factors. In particular, backup and disaster-recovery design deserve special attention so that data stays safe and recoverable. You also need to plan for load balancing, failover, and automated operations to improve the availability and reliability of the system.

I am now going to build a high-availability cluster. These are just my personal study notes; corrections from more experienced readers are welcome.

High-Availability Cluster Architecture Diagram

(architecture diagram)

Environment Preparation


Server Planning

Role                   Hostname    Public IP    Internal IP    Software
Management server      master-61   10.0.0.61    172.16.1.61    Ansible/zabbix/jumpserver/openvpn
Load balancer          slb-5       10.0.0.5     172.16.1.5     nginx/keepalived
Load balancer          slb-6       10.0.0.6     172.16.1.6     nginx/keepalived
Web server             web-7       10.0.0.7     172.16.1.7     nginx/php
Web server             web-8       10.0.0.8     172.16.1.8     nginx/tomcat
Web server             web-9       10.0.0.9     172.16.1.9     nginx/php
Storage server         nfs-31      10.0.0.31    172.16.1.31    nfs/rsyncd/lsyncd
Backup server          rsync-41    10.0.0.41    172.16.1.41    nfs/rsyncd/lsyncd
Database server        db-51       10.0.0.51    172.16.1.51    mysql/redis
Local yum repository   yum-88      10.0.0.88    172.16.1.88    nginx

Configuration Planning

OS: CentOS 7

Memory: 4 GB for the management server, 512 MB for the others

Deploying the High-Availability Cluster Manually

First deploy the high-availability cluster by hand; later it will be deployed in one step with ansible-playbook.

I. Build a Local Yum Repository

When building the cluster we need to make sure that the installed software versions are consistent. Installing from a local yum repository is also fast and unaffected by the external network. We now set up a local yum repository on 10.0.0.88.

1 Configure the HTTP Yum Repository (Server Side)

1.1 Create the directory

Create a local repository path (e.g. /etc/yum.repos.d/local_rpm):

mkdir /etc/yum.repos.d/local_rpm

1.2 Install createrepo:

yum -y install createrepo  

1.3 Create the repository

createrepo /etc/yum.repos.d/local_rpm/

1.4 Put the RPM packages into the directory

You can use the current yum configuration to download RPM packages without installing them, or simply copy prepared RPM packages into the directory. Below is an example of downloading without installing:

yum install --downloadonly --downloaddir=/etc/yum.repos.d/local_rpm vsftpd

If the local repository is later missing a package you need, download it with the command above and then refresh the repository metadata:

createrepo --update /etc/yum.repos.d/local_rpm

1.5 Install Nginx

yum install nginx -y

1.6 Edit the Nginx configuration

vim /etc/nginx/conf.d/yum.conf
[root@yum88 ~]#cat /etc/nginx/conf.d/yum.conf 
server{
  listen 19988;
  server_name _;
  charset utf-8,gbk;
  location / {
  root /etc/yum.repos.d/local_rpm/;
  autoindex on;
  autoindex_localtime on;
  autoindex_exact_size off;
  
  }
}

1.7 Start Nginx

systemctl start nginx

1.8 The Nginx-served downloadable yum repository is now complete.
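
As a quick check (assuming the firewall allows port 19988), the directory index and the repository metadata can be requested directly from the yum server:

curl http://10.0.0.88:19988/
curl -I http://10.0.0.88:19988/repodata/repomd.xml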

2 Client Configuration

2.1 Configure the yum repo file

cat > /etc/yum.repos.d/88.repo << 'EOF'
[88-rpm]
name=88 yum repo
baseurl=http://10.0.0.88:19988
enabled=1
gpgcheck=0
EOF

2.2 Clear the local yum cache

yum clean all
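
After clearing the cache, rebuild it and confirm that the new repository is visible; a test install restricted to the local repo is a quick sanity check:

yum makecache
yum repolist
yum install vsftpd --disablerepo="*" --enablerepo="88-rpm" -y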

II. Build the NFS Shared Storage Service

Set up an NFS server on nfs31 and create a shared directory (for static files) that serves static website data to web7, web8 and web9. In this lab only the site's images are shared as static resources.


Install the NFS service

nfs-utils: the main NFS package; it contains the rpc.nfsd and rpc.mountd daemons plus the related documentation and commands.

rpcbind: the RPC program used on CentOS 6/7.

[root@nfs31 ~]#yum install nfs-utils rpcbind -y

Create the www group and user

Creating a dedicated group and user gives the administrator finer control over access to the NFS share and keeps the system secure and stable.

[root@nfs31 ~]#groupadd -g 666 www
[root@nfs31 ~]#useradd -g www -u 666 -M -s /sbin/nologin www
[root@nfs31 ~]#id www
uid=666(www) gid=666(www) groups=666(www)

Create the shared directory

Create the shared directory /web-data/wordpress and change its owner and group to www.

[root@nfs31 ~]#mkdir -p /web-data/wordpress
[root@nfs31 ~]#chown www.www -R /web-data/wordpress

Edit the configuration file

The default NFS configuration file is /etc/exports.

[root@nfs31 ~]#vim /etc/exports
[root@nfs31 ~]#cat /etc/exports
/web-data/wordpress/   172.16.1.0/24(rw,sync,all_squash,anonuid=666,anongid=666)
Configuration parameters explained:
/web-data/wordpress/      the NFS shared directory
172.16.1.0/24             the NFS clients allowed to mount it
rw                        read and write access
sync                      data is written synchronously to memory and disk; safer, but slower
all_squash                map all NFS client users to the anonymous user; a common production setting that lowers privileges and improves security
anonuid=666, anongid=666  the UID and GID used for the anonymous user

Start NFS

[root@nfs31 ~]#systemctl start nfs
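
To check that the directory is actually exported, exportfs and showmount can be used (showmount also works from any client that has nfs-utils installed):

exportfs -v
showmount -e 172.16.1.31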

Install lsyncd

[root@nfs31 ~]#yum install lsyncd -y

Edit the lsyncd configuration file

[root@nfs31 ~]#vim /etc/lsyncd.conf
[root@nfs31 ~]#cat /etc/lsyncd.conf
settings {
  logfile     ="/var/log/lsyncd/lsyncd.log",
  statusFile   ="/var/log/lsyncd/lsyncd.status",
  inotifyMode = "CloseWrite",
  maxProcesses = 8,
  }
​
sync {
  default.rsync,
  source   = "/web-data/wordpress/",
  target   = "rsync_backup@172.16.1.41::data",
  delete= true,
  exclude = {".*"},
  delay=1,
  rsync     = {
      binary   = "/usr/bin/rsync",
      archive   = true,
      compress = true,
      verbose   = true,
      password_file="/etc/rsync.passwd",
      _extra={"--bwlimit=200"}
      }
  }

Configure the lsyncd password file

The rsync password file used by lsyncd only needs to contain the password itself, and its permissions must be 600.

[root@nfs31 ~]#vim /etc/rsync.passwd
[root@nfs31 ~]#cat /etc/rsync.passwd
zxx123
[root@nfs31 ~]#chmod 600 /etc/rsync.passwd
[root@nfs31 ~]#ll /etc/rsync.passwd
-rw------- 1 root root 7 Mar 28 12:06 /etc/rsync.passwd

Start lsyncd (do not start it yet; it can only start successfully after the rsync server in the next section has been deployed).

[root@nfs31 ~]#systemctl start lsyncd
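
Once the rsync server in the next section is ready and lsyncd has been started, synchronization can be confirmed by watching the log and status files defined in the configuration above:

systemctl status lsyncd
tail -f /var/log/lsyncd/lsyncd.log
cat /var/log/lsyncd/lsyncd.status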

III. Deploy the rsync Server

Create the www group and user

[root@rsync41 ~]#groupadd -g 666 www
[root@rsync41 ~]#useradd -g www -u 666 -M -s /sbin/nologin www
[root@rsync41 ~]#id www
uid=666(www) gid=666(www) groups=666(www)

Create the directory and set ownership

[root@rsync41 ~]#mkdir /data/wordpress -p
[root@rsync41 ~]#chown www.www -R /data/wordpress
[root@rsync41 ~]#ll -d /data/wordpress
drwxr-xr-x 2 www www 6 Mar 28 12:53 /data/wordpress

Install the rsync service

[root@rsync41 ~]#yum install rsync -y

Edit the configuration files

[root@rsync41 ~]#vim /etc/rsync.passwd
[root@rsync41 ~]#cat /etc/rsync.passwd
rsync_backup:zxx123
[root@rsync41 ~]#vim /etc/rsyncd.conf
[root@rsync41 ~]#cat /etc/rsyncd.conf
uid = www
gid = www
port = 873
fake super = yes
use chroot = no
max connections = 200
timeout = 600
ignore errors
read only = false
list = false
auth users = rsync_backup
secrets file = /etc/rsync.passwd
log file = /var/log/rsyncd.log
[mybackup]
comment = chaoge rsync backup!
path = /backup
[data]
comment = wordpress
path = /data/wordpress
[root@rsync41 ~]#chmod 600 /etc/rsync.passwd

Start rsyncd

[root@rsync41 ~]#systemctl start rsyncd
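
Before starting lsyncd on nfs31, a manual push is a simple way to confirm that the rsync daemon, the data module and the password file all work together (run this on nfs31; it mirrors what lsyncd will do automatically):

rsync -avz --password-file=/etc/rsync.passwd /web-data/wordpress/ rsync_backup@172.16.1.41::data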

IV. Deploy the Web Servers

Create the www user

[root@web8 ~]#groupadd -g 666 www
[root@web8 ~]#useradd -g www -u 666 -M -s /sbin/nologin www

Set up the PHP environment

yum install -y php71w-cli php71w-common php71w-devel php71w-embedded php71w-gd php71w-mcrypt php71w-mbstring php71w-pdo php71w-xml php71w-fpm php71w-mysqlnd php71w-opcache php71w-pecl-memcached php71w-pecl-redis php71w-pecl-mongodb php71w-json php71w-pecl-apcu php71w-pecl-apcu-devel

Edit the php-fpm configuration file

[root@web8 ~]#sed -i '/^user/c user = www' /etc/php-fpm.d/www.conf
[root@web8 ~]#sed -i '/^group/c group = www' /etc/php-fpm.d/www.conf
[root@web8 ~]#grep -E '^(user|group)' /etc/php-fpm.d/www.conf
user = www
group = www

Start php-fpm

[root@web8 ~]#systemctl start php-fpm.service
[root@web8 ~]#systemctl enable php-fpm.service
Created symlink from /etc/systemd/system/multi-user.target.wants/php-fpm.service to /usr/lib/systemd/system/php-fpm.service.
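
php-fpm is expected to listen on 127.0.0.1:9000, the address the Nginx fastcgi_pass directive below points at; this can be confirmed with:

ss -lntp | grep 9000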

Install Nginx

[root@web8 ~]#yum install nginx -y

Edit the Nginx configuration (change the run user)

[root@web8 ~]#sed -i '/^user/c user www;' /etc/nginx/nginx.conf
[root@web8 ~]#sed -n '/^user/p' /etc/nginx/nginx.conf
user www;

Delete the default Nginx configuration file

[root@web8 ~]#rm -f /etc/nginx/conf.d/default.conf

Add the site configuration file

[root@web8 ~]#vim /etc/nginx/conf.d/wordpress.conf
[root@web8 ~]#cat /etc/nginx/conf.d/wordpress.conf
server{
  listen 80;
  server_name wordpress.linux.cc;
​
  # 静态请求资源存放路径
  root /www/wordpress;
  index index.php index.html;
​
  # 动态请求处理
  location ~ \.php$ {
​
      root /www/wordpress;
      fastcgi_pass 127.0.0.1:9000;
      fastcgi_index index.php;
      fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
      include fastcgi_params;
  }
}

Create the WordPress web root

[root@web8 ~]#mkdir /www
[root@web8 ~]#cd /www
[root@web8 /www]#rz -E # upload the WordPress source archive
[root@web8 /www]#unzip wordpress.zip
[root@web8 /www]#chown www.www -R /www

Start Nginx

[root@web8 /www]#systemctl start nginx
[root@web8 /www]#systemctl enable nginx
Created symlink from /etc/systemd/system/multi-user.target.wants/nginx.service to /usr/lib/systemd/system/nginx.service
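
With the site configuration in place and Nginx running, the syntax can be checked and the site tested locally; the Host header must match the server_name above, and at this stage any HTTP response (WordPress will typically redirect to its installer) is enough:

nginx -t
curl -I -H 'Host: wordpress.linux.cc' http://127.0.0.1/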

Install the NFS client

Needed so that the nfs filesystem type can be recognized when mounting.

[root@web8 /www]#yum install nfs-utils -y

Mount the NFS share

[root@web8 /www]#mount -t nfs 172.16.1.31:/web-data/wordpress /www/wordpress/wp-content/uploads
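
df -h confirms the mount; optionally, an /etc/fstab entry along the following lines (a sketch; adjust the options as needed) makes the mount persist across reboots:

df -h | grep wordpress
echo '172.16.1.31:/web-data/wordpress /www/wordpress/wp-content/uploads nfs defaults,_netdev 0 0' >> /etc/fstab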

V. Deploy the Load Balancers

Create the www user

[root@ld5 ~]#groupadd -g 666 www
[root@ld5 ~]#useradd -g www -u 666 -M -s /sbin/nologin www
[root@ld5 ~]#id www
uid=666(www) gid=666(www) groups=666(www)

Install Nginx

[root@ld5 ~]#yum install nginx -y

Configure Nginx

Change the Nginx run user

[root@ld5 ~]#sed -i '/^user/c user www;' /etc/nginx/nginx.conf
[root@ld5 ~]#sed -n '/^user/p' /etc/nginx/nginx.conf
user www;

Delete the default Nginx configuration file

[root@ld5 ~]#rm -f /etc/nginx/conf.d/default.conf

Create the load-balancer configuration file

[root@ld5 ~]#vim /etc/nginx/conf.d/ld.conf
[root@ld5 ~]#cat /etc/nginx/conf.d/ld.conf
upstream web-pools{
server 172.16.1.7:80;
server 172.16.1.8:80;  
}
server{
  listen 80;
  server_name _;
  location / {
  proxy_pass http://web-pools;
  include /etc/nginx/proxy_params.conf;
}
}

Create the proxy parameters file

[root@ld5 ~]#vim /etc/nginx/proxy_params.conf
[root@ld5 ~]#cat /etc/nginx/proxy_params.conf
proxy_set_header Host $http_host;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_connect_timeout 30;
proxy_send_timeout 60;
proxy_read_timeout 60;
proxy_buffering on;
proxy_buffer_size 32k;
proxy_buffers 4 128k;

Start Nginx

[root@ld5 ~]#systemctl start nginx
[root@ld5 ~]#systemctl enable nginx
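
Assuming ld6 is configured the same way, the site can now be tested through either load balancer; the Host header has to match the server_name configured on the web servers:

curl -I -H 'Host: wordpress.linux.cc' http://10.0.0.5/
curl -I -H 'Host: wordpress.linux.cc' http://10.0.0.6/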

Install Keepalived

[root@ld5 ~]#yum install keepalived -y

Configure Keepalived

ld5 is configured as the MASTER; the VIP is 10.0.0.3.

[root@ld5 ~]#vim /etc/keepalived/keepalived.conf
[root@ld5 ~]#cat /etc/keepalived/keepalived.conf
global_defs {
  router_id lb-5
}
​
vrrp_instance VIP_1 {
  state MASTER
  interface eth0
  virtual_router_id 50
  priority 150
  advert_int 1
  authentication {
      auth_type PASS
      auth_pass 1111
  }
  virtual_ipaddress {
      10.0.0.3
  }
}
​

ld6 is configured as the BACKUP.

global_defs {
    router_id lb-6
}
​
vrrp_instance VIP_1 {
    state BACKUP
    interface eth0
    virtual_router_id 50
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        10.0.0.3
    }
}
[root@ld5 ~]#systemctl start keepalived.service
[root@ld5 ~]#systemctl enable keepalived.service
Created symlink from /etc/systemd/system/multi-user.target.wants/keepali
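
The VIP should now be bound on the MASTER; checking it, and simulating a failover, looks roughly like this:

# on ld5, the VIP 10.0.0.3 should appear on eth0
ip addr show eth0 | grep 10.0.0.3
# simulate a failure on ld5 and confirm the VIP moves to ld6
systemctl stop keepalived
# then check eth0 on ld6
ip addr show eth0 | grep 10.0.0.3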

VI. Deploy the MHA Architecture

Install and start MySQL

Perform the following on db51, db52 and db53 together.

Create the MySQL directories

mkdir -p /data/soft/
mkdir /var/log/mysql  
mkdir -p /data/mysql_3306/
mkdir -p /opt/mysql5.7.28/
mkdir /binlog
cd /data/soft/
​
wget https://cdn.mysql.com/archives/mysql-5.7/mysql-5.7.28-linux-glibc2.12-x86_64.tar.gz
​
tar -xf mysql-5.7.28-linux-glibc2.12-x86_64.tar.gz
mv /data/soft/mysql-5.7.28-linux-glibc2.12-x86_64/* /opt/mysql5.7.28/
ln -s /opt/mysql5.7.28/ /opt/mysql
​
echo 'export PATH=$PATH:/opt/mysql/bin' >> /etc/profile
source /etc/profile

[root@db53 /data/soft]#mysql -V    # check that the installation is correct

5. Install the MySQL 5.7 dependencies

yum install libaio-devel -y

6. Create the mysql user and set directory ownership

useradd -s /sbin/nologin -M mysql
chown -R mysql.mysql /data/
chown -R mysql.mysql /opt/mysql*
chown -R mysql.mysql /var/log/mysql
chown -R mysql.mysql /binlog/  

Initialize the database

mysqld --initialize-insecure --user=mysql --basedir=/opt/mysql --datadir=/data/mysql_3306

Configuration files

db51

cat >/etc/my.cnf <<'EOF'
[mysqld]
user=mysql
datadir=/data/mysql_3306
basedir=/opt/mysql/
socket=/tmp/mysql.sock
port=3306
log_error=/var/log/mysql/mysql.err
server_id=51
log_bin=/binlog/mysql-bin
​
autocommit=0
binlog_format=row
gtid-mode=on
enforce-gtid-consistency=true
log-slave-updates=1
​
[mysql]
socket=/tmp/mysql.sock
​
[client]
socket=/tmp/mysql.sock
EOF

db52

cat >/etc/my.cnf <<'EOF'
[mysqld]
user=mysql
datadir=/data/mysql_3306
basedir=/opt/mysql/
socket=/tmp/mysql.sock
port=3306
log_error=/var/log/mysql/mysql.err
server_id=52
log_bin=/binlog/mysql-bin
​
autocommit=0
binlog_format=row
gtid-mode=on
enforce-gtid-consistency=true
log-slave-updates=1
​
[mysql]
socket=/tmp/mysql.sock
​
[client]
socket=/tmp/mysql.sock
EOF

db53

cat >/etc/my.cnf <<'EOF'
[mysqld]
user=mysql
datadir=/data/mysql_3306
basedir=/opt/mysql/
socket=/tmp/mysql.sock
port=3306
log_error=/var/log/mysql/mysql.err
server_id=53
log_bin=/binlog/mysql-bin
​
autocommit=0
binlog_format=row
gtid-mode=on
enforce-gtid-consistency=true
log-slave-updates=1
​
[mysql]
socket=/tmp/mysql.sock
​
[client]
socket=/tmp/mysql.sock
EOF

Copy the startup script

cp /opt/mysql/support-files/mysql.server /etc/init.d/mysqld

Start MySQL

systemctl daemon-reload
systemctl enable mysqld
systemctl restart mysqld
systemctl status mysqld
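
A quick check that each instance is up and reports the expected server_id:

mysql -e "select @@version, @@server_id;"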

Configure GTID-Based Master-Slave Replication

Create the replication user on the master (db51)

mysql> grant replication slave on *.* to repl@'10.0.0.%' identified by 'zxx123';
Query OK, 0 rows affected, 1 warning (0.00 sec)

Check GTID status on the master

mysql> show global variables like '%gtid%';
+----------------------------------+----------------------------------------+
| Variable_name                   | Value                                 |
+----------------------------------+----------------------------------------+
| binlog_gtid_simple_recovery     | ON                                     |
| enforce_gtid_consistency         | ON                                     |
| gtid_executed                   | a5486579-cf06-11ed-8789-000c2911fe3c:1 |
| gtid_executed_compression_period | 1000                                   |
| gtid_mode                       | ON                                     |
| gtid_owned                       |                                       |
| gtid_purged                     |                                       |
| session_track_gtids             | OFF                                   |
+----------------------------------+----------------------------------------+
8 rows in set (0.01 sec)
​
mysql> show global variables like 'server%';
+----------------+--------------------------------------+
| Variable_name | Value                               |
+----------------+--------------------------------------+
| server_id     | 51                                   |
| server_id_bits | 32                                   |
| server_uuid   | a5486579-cf06-11ed-8789-000c2911fe3c |
+----------------+--------------------------------------+
3 rows in set (0.01 sec)
​
mysql> show master status;
+------------------+----------+--------------+------------------+----------------------------------------+
| File             | Position | Binlog_Do_DB | Binlog_Ignore_DB | Executed_Gtid_Set                     |
+------------------+----------+--------------+------------------+----------------------------------------+
| mysql-bin.000001 |     444 |             |                 | a5486579-cf06-11ed-8789-000c2911fe3c:1 |
+------------------+----------+--------------+------------------+----------------------------------------+
1 row in set (0.00 sec)
​

Operations on the slaves

mysql> show slave status\G
Empty set (0.00 sec)
​
change master to master_host='10.0.0.51', master_user='repl', master_password='zxx123' , MASTER_AUTO_POSITION=1;
​
start slave;
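
The replication state can then be confirmed on each slave; both the IO and SQL threads should show Yes:

mysql -e "show slave status\G" | grep -E "Slave_IO_Running|Slave_SQL_Running|Seconds_Behind_Master"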

GTID-based master-slave replication is now in place; next, deploy MHA.

The MHA tools call the mysql and mysqlbinlog commands directly, so symlinks are needed; use which to check the actual paths of the mysql and mysqlbinlog commands.

1. Set up the symlinks

ln -s /opt/mysql/bin/mysql /usr/bin/mysql
ln -s /opt/mysql/bin/mysqlbinlog /usr/bin/mysqlbinlog

2. Enable passwordless SSH between all nodes

Allow the three nodes db51, db52 and db53 to log in to each other without a password.

ssh-keygen
yum install sshpass -y
sshpass -p '123123' ssh-copy-id 10.0.0.51 -o StrictHostKeyChecking=no
sshpass -p '123123' ssh-copy-id 10.0.0.52 -o StrictHostKeyChecking=no
sshpass -p '123123' ssh-copy-id 10.0.0.53 -o StrictHostKeyChecking=no
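
A quick loop verifies that key-based login works from the current node to all three nodes (repeat it on each node):

for ip in 10.0.0.51 10.0.0.52 10.0.0.53; do ssh -o BatchMode=yes root@$ip hostname; done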

3. Install MHA node on all nodes

yum install mha4mysql-node.noarch -y

4. Install MHA Manager on db53

The MHA manager can be installed on any node; here it is installed on the slave node db53.

The manager monitors the MySQL cluster over SSH. If it were installed on the master and that server went down or lost its network connection, MHA would be unable to complete the failover.

Therefore the MHA manager must not be installed on the master node.

[root@db53 ~]#yum install -y perl-DBD-MySQL perl-Config-Tiny perl-Log-Dispatch perl-Parallel-ForkManager perl-ExtUtils-CBuilder perl-ExtUtils-MakeMaker perl-CPAN perl-Time-HiRes
[root@db53 ~]#yum install mha4mysql-manager.noarch -y
# If the packages cannot be installed, install the EPEL repository first
yum install epel-release -y

Create the mha user on all nodes

# Create it on the master; replication will sync it to the three servers
# Creating it on db51 is enough
mysql> grant all privileges on *.* to mha@'%' identified by 'zxx123';
mysql> flush privileges;
# Check the user's privileges
mysql> show grants for 'mha'@'%';
+------------------------------------------+
| Grants for mha@%                         |
+------------------------------------------+
| GRANT ALL PRIVILEGES ON *.* TO 'mha'@'%' |
+------------------------------------------+
1 row in set (0.00 sec)

Create the manager configuration file on db53

mkdir -p /etc/mha            #<== create the mha directory under /etc
mkdir -p /var/log/mha/app1   #<== create the manager log directory
# /etc/mha/app1.cnf          #<== the MHA configuration file; its contents follow
Configuration file:
cat > /etc/mha/app1.cnf << 'EOF'
[server default]
manager_log=/var/log/mha/app1/manager.log
manager_workdir=/var/log/mha/app1.log
master_binlog_dir=/binlog/
master_ip_failover_script=/usr/local/bin/master_ip_failover
user=mha
password=zxx123
ping_interval=2
repl_user=repl
repl_password=zxx123
ssh_user=root
​
[server1]
hostname=10.0.0.51
port=3306
​
[server2]
hostname=10.0.0.52
port=3306
​
[server3]
hostname=10.0.0.53
port=3306
EOF

Status check on db53

[root@db53 ~]#masterha_check_ssh --conf=/etc/mha/app1.cnf
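
Besides the SSH check, MHA also ships a replication check that validates the whole topology against the same configuration file; it is worth running before starting the manager:

[root@db53 ~]#masterha_check_repl --conf=/etc/mha/app1.cnf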

1. Write the VIP failover script

cat > /usr/local/bin/master_ip_failover << 'EOF'
#!/usr/bin/env perl
​
use strict;
use warnings FATAL => 'all';
​
use Getopt::Long;
​
my (
    $command,          $ssh_user,        $orig_master_host, $orig_master_ip,
    $orig_master_port, $new_master_host, $new_master_ip,    $new_master_port
);
​
my $vip = '10.0.0.55/24';
my $key = '1';
my $ssh_start_vip = "/sbin/ifconfig eth0:$key $vip";
my $ssh_stop_vip = "/sbin/ifconfig eth0:$key down";
my $ssh_Bcast_arp="/sbin/arping -I eth0 -c 3 -A 10.0.0.55";
​
GetOptions(
    'command=s'          => \$command,
    'ssh_user=s'         => \$ssh_user,
    'orig_master_host=s' => \$orig_master_host,
    'orig_master_ip=s'   => \$orig_master_ip,
    'orig_master_port=i' => \$orig_master_port,
    'new_master_host=s'  => \$new_master_host,
    'new_master_ip=s'    => \$new_master_ip,
    'new_master_port=i'  => \$new_master_port,
);
​
exit &main();
​
sub main {
​
    print "\n\nIN SCRIPT TEST====$ssh_stop_vip==$ssh_start_vip===\n\n";
​
    if ( $command eq "stop" || $command eq "stopssh" ) {
​
        my $exit_code = 1;
        eval {
            print "Disabling the VIP on old master: $orig_master_host \n";
            &stop_vip();
            $exit_code = 0;
        };
        if ($@) {
            warn "Got Error: $@\n";
            exit $exit_code;
        }
        exit $exit_code;
    }
    elsif ( $command eq "start" ) {
​
        my $exit_code = 10;
        eval {
            print "Enabling the VIP - $vip on the new master - $new_master_host \n";
            &start_vip();
            $exit_code = 0;
        };
        if ($@) {
            warn $@;
            exit $exit_code;
        }
        exit $exit_code;
    }
    elsif ( $command eq "status" ) {
        print "Checking the Status of the script.. OK \n";
        exit 0;
    }
    else {
        &usage();
        exit 1;
    }
}
​
​
sub start_vip() {
    `ssh $ssh_user\@$new_master_host \" $ssh_start_vip \"`;
}
sub stop_vip() {
     return 0  unless  ($ssh_user);
    `ssh $ssh_user\@$orig_master_host \" $ssh_stop_vip \"`;
}
​
sub usage {
    print
    "Usage: master_ip_failover --command=start|stop|stopssh|status --orig_master_host=host --orig_master_ip=ip --orig_master_port=port --new_master_host=host --new_master_ip=ip --new_master_port=port\n";
}
EOF
chmod +x /usr/local/bin/master_ip_failover

3. Add the VIP on the master (db51)

# add the VIP
ifconfig eth0:1 10.0.0.55/24
# remove the VIP
ifconfig eth0:1 del 10.0.0.55
# bring the alias down
ifconfig eth0:1 down

Start the MHA manager

[root@db53 /]#nohup masterha_manager --conf=/etc/mha/app1.cnf --remove_dead_master_conf --ignore_last_failover > /var/log/mha/app1/manager.log 2>&1 &

# stop command
masterha_stop --conf=/etc/mha/app1.cnf

Check that MHA is running


[root@db53 ~]#masterha_check_status --conf=/etc/mha/app1.cnf
app1 (pid:1515) is running(0:PING_OK), master:10.0.0.51
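
As an optional failover drill (only in this lab environment): stop MySQL on the current master and watch the manager promote a new master and move the VIP.

# on db51
systemctl stop mysqld
# on db53, follow the failover in the manager log
tail -f /var/log/mha/app1/manager.log
# on the newly promoted master the VIP 10.0.0.55 should now be present
ip addr | grep 10.0.0.55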

MHA deployment is now complete. Next, prepare for the WordPress installation.

Create the wordpress database

[root@db51 ~]#mysql -uroot -pzxx123 -e 'create database wordpress'

Grant remote access

[root@db51 ~]#mysql -uroot -pzxx123 -e "grant all privileges on *.* to 'zxx'@'%' identified by 'zxx123'"

Flush privileges

[root@db51 ~]#mysql -uroot -pzxx123 -e 'flush privileges'
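
From one of the web servers (assuming a mysql client is installed there), a remote login through the MHA VIP 10.0.0.55 created earlier confirms that the grant works:

mysql -h 10.0.0.55 -uzxx -pzxx123 -e 'show databases;'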
