MySQL InnoDB Cluster: A Pitfall Case of Rejoining Nodes to the Cluster

邹弘丽 · 3 hours ago
The case: two nodes of a MySQL InnoDB Cluster were stuck in MISSING state. After taking over and talking it through, the rough cause emerged: all three nodes of the InnoDB Cluster had been restarted at the same time, then repeatedly restarted afterwards, and the underlying problem was never fixed; instead, the first node was simply used as the bootstrap node. The current state is shown below (node information has been obfuscated):
 MySQL  mysql01:3306 ssl  JS > cluster.status();
{
    "clusterName": "myCluster",
    "defaultReplicaSet": {
        "name": "default",
        "primary": "mysql01:3306",
        "ssl": "REQUIRED",
        "status": "OK_NO_TOLERANCE_PARTIAL",
        "statusText": "Cluster is NOT tolerant to any failures. 2 members are not active.",
        "topology": {
            "mysql01:3306": {
                "address": "mysql01:3306",
                "memberRole": "PRIMARY",
                "mode": "R/W",
                "readReplicas": {},
                "replicationLag": "applier_queue_applied",
                "role": "HA",
                "status": "ONLINE",
                "version": "8.0.39"
            },
            "mysql02:3306": {
                "address": "mysql02:3306",
                "instanceErrors": [
                    "NOTE: group_replication is stopped."
                ],
                "memberRole": "SECONDARY",
                "memberState": "OFFLINE",
                "mode": "n/a",
                "readReplicas": {},
                "role": "HA",
                "status": "(MISSING)",
                "version": "8.0.39"
            },
            "mysql03:3306": {
                "address": "mysql03:3306",
                "instanceErrors": [
                    "NOTE: group_replication is stopped."
                ],
                "memberRole": "SECONDARY",
                "memberState": "OFFLINE",
                "mode": "n/a",
                "readReplicas": {},
                "role": "HA",
                "status": "(MISSING)",
                "version": "8.0.39"
            }
        },
        "topologyMode": "Single-Primary"
    },
    "groupInformationSourceMember": "mysql01:3306"
}
 MySQL  mysql01:3306 ssl  JS >
Normally, if all goes well, you only need to rejoin the node to the cluster and let it recover its data:
 MySQL  mysql01:3306 ssl  JS > cluster.rejoinInstance('mysqladm@mysql02:3306')
Validating instance configuration at mysql02:3306...
This instance reports its own address as mysql02:3306
Instance configuration is suitable.
ERROR: A GTID set check of the MySQL instance at 'mysql02:3306' determined that it contains transactions that do not originate from the cluster, which must be discarded before it can join the cluster.
mysql02:3306 has the following errant GTIDs that do not exist in the cluster:
0d3d8524-a642-11ef-849c-b496919a423e:36017566-36017580
Having extra GTID events is not expected, and it is recommended to investigate this further and ensure that the data can be removed prior to rejoining the instance to the cluster.
Discarding these extra GTID events can either be done manually or by completely overwriting the state of mysql02:3306 with a physical snapshot from an existing cluster member. To achieve this remove the instance from the cluster and add it back using <Cluster>.addInstance() and setting the 'recoveryMethod' option to 'clone'.
ERROR: RuntimeError: The instance 'mysql02:3306' contains errant transactions that did not originate from the cluster.
Cluster.rejoinInstance: The instance 'mysql02:3306' contains errant transactions that did not originate from the cluster. (RuntimeError)
But as shown above: when the InnoDB Cluster originally broke, node mysql01 did not hold the latest transactions, yet mysql01 was used as the bootstrap node anyway. Now that node mysql02 tries to rejoin, its extra transactions conflict with the cluster.
By this point, the application had already been running against mysql01 as a standalone node for some time, so mysql01 held a large amount of new data. The only option was to remove nodes mysql02/mysql03 from the cluster and then add them back.
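Before discarding anything, it helps to confirm exactly which transactions are errant. A minimal sketch using MySQL's standard GTID functions (the GTID set values below are placeholders, not taken from this case):

```sql
-- On mysql02: capture the node's executed GTID set
SELECT @@GLOBAL.gtid_executed;

-- On mysql01 (any healthy cluster member): capture the cluster's executed GTID set
SELECT @@GLOBAL.gtid_executed;

-- On any server: subtract the cluster's set from mysql02's set.
-- Whatever remains is exactly the errant transaction range reported by the shell.
SELECT GTID_SUBTRACT('<mysql02 gtid_executed>', '<cluster gtid_executed>') AS errant_gtids;
```

This is the same check MySQL Shell performs internally; running it by hand tells you how large the errant range is before you commit to throwing the node's state away.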
 MySQL  mysql01:3306 ssl  JS > cluster.removeInstance('mysqladm@mysql02:3306')
ERROR: mysql02:3306 is reachable but has state OFFLINE
To safely remove it from the cluster, it must be brought back ONLINE. If not possible, use the 'force' option to remove it anyway.
Do you want to continue anyway (only the instance metadata will be removed)? [y/N]: yes
The instance will be removed from the InnoDB Cluster.
NOTE: Transaction sync was skipped
NOTE: The instance 'mysql02:3306' is OFFLINE, Group Replication stop skipped.
The instance 'mysql02:3306' was successfully removed from the cluster.
 MySQL  mysql01:3306 ssl  JS > cluster.status();
{
    "clusterName": "myCluster",
    "defaultReplicaSet": {
        "name": "default",
        "primary": "mysql01:3306",
        "ssl": "REQUIRED",
        "status": "OK_NO_TOLERANCE_PARTIAL",
        "statusText": "Cluster is NOT tolerant to any failures. 1 member is not active.",
        "topology": {
            "mysql01:3306": {
                "address": "mysql01:3306",
                "memberRole": "PRIMARY",
                "mode": "R/W",
                "readReplicas": {},
                "replicationLag": "applier_queue_applied",
                "role": "HA",
                "status": "ONLINE",
                "version": "8.0.39"
            },
            "mysql03:3306": {
                "address": "mysql03:3306",
                "instanceErrors": [
                    "NOTE: group_replication is stopped."
                ],
                "memberRole": "SECONDARY",
                "memberState": "OFFLINE",
                "mode": "n/a",
                "readReplicas": {},
                "role": "HA",
                "status": "(MISSING)",
                "version": "8.0.39"
            }
        },
        "topologyMode": "Single-Primary"
    },
    "groupInformationSourceMember": "mysql01:3306"
}
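Incidentally, the interactive prompt above can be skipped in scripts by passing the force option that the error message mentions (a standard Cluster.removeInstance option; sketch only):

```
 MySQL  mysql01:3306 ssl  JS > cluster.removeInstance('mysqladm@mysql02:3306', {force: true})
```

This removes only the instance's metadata when the member cannot be brought back ONLINE, which is exactly what answering "yes" did here.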
Next, add node mysql02 back to the cluster. Normally this step should also be smooth, but it hit a big pit, as shown below:
 MySQL  mysql01:3306 ssl  JS > cluster.addInstance('mysqladm@mysql02:3306');
WARNING: A GTID set check of the MySQL instance at 'mysql02:3306' determined that it contains transactions that do not originate from the cluster, which must be discarded before it can join the cluster.
mysql02:3306 has the following errant GTIDs that do not exist in the cluster:
0d3d8524-a642-11ef-849c-b496919a423e:36017566-36017580
WARNING: Discarding these extra GTID events can either be done manually or by completely overwriting the state of mysql02:3306 with a physical snapshot from an existing cluster member. To use this method by default, set the 'recoveryMethod' option to 'clone'.
Having extra GTID events is not expected, and it is recommended to investigate this further and ensure that the data can be removed prior to choosing the clone recovery method.
Please select a recovery method [C]lone/[A]bort (default Abort): C
Validating instance configuration at mysql02:3306...
This instance reports its own address as mysql02:3306
Instance configuration is suitable.
NOTE: Group Replication will communicate with other members using 'mysql02:3306'. Use the localAddress option to override.
* Checking connectivity and SSL configuration...
A new instance will be added to the InnoDB Cluster. Depending on the amount of
data on the cluster this might take from a few seconds to several hours.
Adding instance to the cluster...
Monitoring recovery process of the new cluster member. Press ^C to stop monitoring and let it continue in background.
Clone based state recovery is now in progress.
NOTE: A server restart is expected to happen as part of the clone process. If the
server does not support the RESTART command or does not come back after a
while, you may need to manually start it back.
* Waiting for clone to finish...
NOTE: mysql02:3306 is being cloned from mysql01:3306
ERROR: The clone process has failed: Error dropping database (can't rmdir './backup/', errno: 17 - File exists) (1010)
ERROR: Error dropping database (can't rmdir './backup/', errno: 17 - File exists)
Cluster.addInstance: Error dropping database (can't rmdir './backup/', errno: 17 - File exists) (RuntimeError)
Here is the pit: in this environment the MySQL data directory is /data/mysql, but the backup filesystem is mounted at /data/mysql/backup, inside the data directory, and the mount point is owned by root. Why the original deployer set it up this way is anyone's guess, but the consequence is that joining the node to the InnoDB Cluster fails: during clone recovery, the DROP DATA stage treats backup as a schema directory to be dropped, and because it is a root-owned mount point the server cannot remove /data/mysql/backup (errno 17: File exists), as the error above shows. The mount layout:
# df -h
Filesystem                 Size  Used Avail Use% Mounted on
...........................................................
/dev/mapper/vg00-mysql     500G  7.3G  493G   2% /data/mysql
/dev/sdb                   3.7T   27G  3.7T   1% /data/mysql/backup
............................................................
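A quick pre-flight check for this class of problem is to verify that nothing is mounted underneath the MySQL datadir before choosing clone recovery. A minimal sketch (the helper name and the /data/mysql path are just this case's values, not a standard tool):

```shell
#!/bin/sh
# Print any mount points that sit below a given datadir.
# Reads a mount table in /proc/mounts format (device target fstype ...) on stdin.
mounts_under_datadir() {
  datadir="$1"
  # field 2 is the mount target; match entries strictly below "$datadir/"
  awk -v d="$datadir/" 'index($2, d) == 1 {print $2}'
}

# Typical usage on a live host:
#   mounts_under_datadir /data/mysql < /proc/mounts
# Any output means clone's DROP DATA stage may be unable to remove that directory.
```

In this case the check would have flagged /data/mysql/backup before the clone was ever attempted.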
After that failure, any further attempt to rejoin the node hit a metadata error, and the node could no longer be added to the InnoDB Cluster at all:
 MySQL  mysql01:3306 ssl  JS > cluster.addInstance('mysqladm@mysql02:3306');
ERROR: MySQL Error 1356: Failed to execute query on Metadata server mysql02:3306: View 'mysql_innodb_cluster_metadata.v2_this_instance' references invalid table(s) or column(s) or function(s) or definer/invoker of view lack rights to use them
Cluster.addInstance: Failed to execute query on Metadata server mysql02:3306: View 'mysql_innodb_cluster_metadata.v2_this_instance' references invalid table(s) or column(s) or function(s) or definer/invoker of view lack rights to use them (MySQL Error 1356)
 MySQL  mysql01:3306 ssl  JS >
After trying several approaches, the final fix was to drop the InnoDB Cluster metadata schema mysql_innodb_cluster_metadata on node mysql02, as follows:
mysql> SET GLOBAL super_read_only = OFF;
Query OK, 0 rows affected (0.00 sec)
mysql> drop database mysql_innodb_cluster_metadata;
Query OK, 12 rows affected (0.02 sec)
mysql> exit
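Worth noting as an alternative: MySQL Shell also ships dba.dropMetadataSchema() for removing broken cluster metadata through a session on the affected instance, which avoids toggling super_read_only by hand (sketch; connect the shell directly to mysql02 first, and treat the option as something to verify against your shell version):

```
 MySQL  mysql02:3306 ssl  JS > dba.dropMetadataSchema({force: true})
```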
Then add node mysql02 back to the MySQL InnoDB Cluster:
 MySQL  mysql01:3306 ssl  JS > cluster.addInstance('mysqladm@mysql02:3306')
NOTE: A GTID set check of the MySQL instance at 'mysql02:3306' determined that it is missing transactions that were purged from all cluster members.
Clone based recovery was selected because it seems to be safely usable.
Validating instance configuration at mysql02:3306...
This instance reports its own address as mysql02:3306
Instance configuration is suitable.
NOTE: Group Replication will communicate with other members using 'mysql02:3306'. Use the localAddress option to override.
* Checking connectivity and SSL configuration...
A new instance will be added to the InnoDB Cluster. Depending on the amount of
data on the cluster this might take from a few seconds to several hours.
Adding instance to the cluster...
Monitoring recovery process of the new cluster member. Press ^C to stop monitoring and let it continue in background.
Clone based state recovery is now in progress.
NOTE: A server restart is expected to happen as part of the clone process. If the
server does not support the RESTART command or does not come back after a
while, you may need to manually start it back.
* Waiting for clone to finish...
NOTE: mysql02:3306 is being cloned from mysql01:3306
** Stage DROP DATA: Completed
NOTE: mysql02:3306 is shutting down...
* Waiting for server restart... ready
* mysql02:3306 has restarted, waiting for clone to finish...
** Stage RESTART: Completed
* Clone process has finished: 730.94 MB transferred in 6 sec (121.82 MB/s)
Incremental state recovery is now in progress.
* Waiting for distributed recovery to finish...
NOTE: 'mysql02:3306' is being recovered from 'mysql01:3306'
* Distributed recovery has finished
The instance 'mysql02:3306' was successfully added to the cluster.
 MySQL  mysql01:3306 ssl  JS > cluster.status()
{
    "clusterName": "myCluster",
    "defaultReplicaSet": {
        "name": "default",
        "primary": "mysql01:3306",
        "ssl": "REQUIRED",
        "status": "OK_NO_TOLERANCE",
        "statusText": "Cluster is NOT tolerant to any failures.",
        "topology": {
            "mysql01:3306": {
                "address": "mysql01:3306",
                "memberRole": "PRIMARY",
                "mode": "R/W",
                "readReplicas": {},
                "replicationLag": "applier_queue_applied",
                "role": "HA",
                "status": "ONLINE",
                "version": "8.0.39"
            },
            "mysql02:3306": {
                "address": "mysql02:3306",
                "memberRole": "SECONDARY",
                "mode": "R/O",
                "readReplicas": {},
                "replicationLag": "applier_queue_applied",
                "role": "HA",
                "status": "ONLINE",
                "version": "8.0.39"
            }
        },
        "topologyMode": "Single-Primary"
    },
    "groupInformationSourceMember": "mysql01:3306"
}
Adding node mysql03 back to the cluster was straightforward, so I won't go into it here. When you take over a project or help someone troubleshoot, an unfamiliar environment or a non-standard configuration makes it very easy to step into pits. If you inherit an unfamiliar production environment, be cautious with every operation.
Source: user-submitted post on 程序园.