[Reposted from the Oracle official blog] An instance crash triggered by a server time adjustment

https://blogs.oracle.com/database4cn/%E4%B8%80%E6%AC%A1%E6%9C%8D%E5%8A%A1%E5%99%A8%E6%97%B6%E9%97%B4%E8%B0%83%E6%95%B4%E5%BC%95%E5%8F%91%E7%9A%84%E5%AE%9E%E4%BE%8B%E5%AE%95%E6%9C%BA%E3%80%82

By: Sam Zhao

Problem description:

1. The database instance suddenly crashed because ASMB had not responded for more than 200 seconds:

Mon Sep 04 15:07:47 2017
WARNING: ASMB has not responded for 200 seconds <<<<<<<<<<<< ASMB has not responded for 200 seconds.
NOTE: ASM umbilicus running slower than expected, ASMB diagnostic requested after 200 seconds
NOTE: ASMB process state dumped to trace file /u01/app/oracle/diag/rdbms/iadw/iadw3/trace/iadw3_gen0_19179.trc
Mon Sep 04 15:07:49 2017
NOTE: ASMB terminating
Mon Sep 04 15:07:49 2017
Errors in file /u01/app/oracle/diag/rdbms/iadw/iadw3/trace/iadw3_asmb_19501.trc:
ORA-15064: communication failure with ASM instance
ORA-03113: end-of-file on communication channel
Process ID:
Session ID: 170 Serial number: 65161
Mon Sep 04 15:07:49 2017
Errors in file /u01/app/oracle/diag/rdbms/iadw/iadw3/trace/iadw3_asmb_19501.trc:
ORA-15064: communication failure with ASM instance
ORA-03113: end-of-file on communication channel
Process ID:
Session ID: 170 Serial number: 65161
USER (ospid: 19501): terminating the instance due to error 15064
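
As an aside, here is a minimal sketch (mine, not from the original post) of pulling the timestamped ORA- errors and termination messages out of a text alert log like the one above; the alert log file name is hypothetical:

import re

# Timestamp lines in a text alert log look like "Mon Sep 04 15:07:47 2017".
TS = re.compile(r"^[A-Z][a-z]{2} [A-Z][a-z]{2} \d{2} \d{2}:\d{2}:\d{2} \d{4}")
INTERESTING = re.compile(r"ORA-\d{5}|terminating the instance|has not responded")

def scan_alert_log(path):
    current_ts = None
    with open(path, errors="replace") as f:
        for line in f:
            if TS.match(line):
                current_ts = line.strip()
            elif INTERESTING.search(line):
                print(f"{current_ts}: {line.strip()}")

scan_alert_log("alert_iadw3.log")  # hypothetical file name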

2. From the system state dump, ASMB looks fine:

Current Wait Stack:
Not in wait; last wait ended 3.321392 sec ago <<<<<<<<<<<<<<<Not in wait.
Wait State:
fixed_waits=0 flags=0x21 boundary=(nil)/-1
Session Wait History:
elapsed time of 3.321404 sec since last wait
0: waited for 'ASM background timer'
=0x0, =0x0, =0x0
wait_id=37936676 seq_num=57511 snap_id=1
wait times: snap=2.682436 sec, exc=2.682436 sec, total=2.682436 sec
wait times: max=infinite
wait counts: calls=0 os=0
occurred after 0.000022 sec of elapsed time
1: waited for 'ASM file Metadata operation'
msgop=0xc, locn=0x3, =0x0
wait_id=37936675 seq_num=57510 snap_id=1
wait times: snap=0.000454 sec, exc=0.000454 sec, total=0.000454 sec
wait times: max=infinite
wait counts: calls=0 os=0
occurred after 0.000017 sec of elapsed time

3. However, the OSW data shows no obvious resource starvation, though there is a gap of more than three minutes in the output:

zzz ***Mon Sep 4 15:04:13 CST 2017
procs -----------memory---------- ---swap-- -----io---- -system-- ------cpu-----
r b swpd free buff cache si so bi bo in cs us sy id wa st
3 0 0 529160192 19412 31514216 0 0 82 48 0 0 1 0 99 0 0
0 0 0 529124032 19412 31514784 0 0 1545 23119 36620 37705 1 1 99 0 0
2 0 0 529126784 19412 31514712 0 0 1601 9056 28083 30263 1 0 99 0 0
zzz ***Mon Sep 4 15:04:23 CST 2017
procs -----------memory---------- ---swap-- -----io---- -system-- ------cpu-----
r b swpd free buff cache si so bi bo in cs us sy id wa st
5 0 0 529095360 19412 31514996 0 0 82 48 0 0 1 0 99 0 0
3 0 0 529118368 19412 31515228 0 0 1517 4540 20402 27856 1 1 98 0 0
52 0 0 529107936 19412 31515400 0 0 1206 3961 21105 31254 1 0 98 0 0
zzz ***Mon Sep 4 15:07:51 CST 2017 <<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<< no records at all between 15:04:23 and 15:07:51
procs -----------memory---------- ---swap-- -----io---- -system-- ------cpu-----
r b swpd free buff cache si so bi bo in cs us sy id wa st
41 0 0 570421952 19412 31556616 0 0 82 48 0 0 1 0 99 0 0
16 0 0 578182976 19412 31575888 0 0 2129 35 25702 15760 1 8 91 0 0
5 0 0 582348800 19412 31607740 0 0 5209 40002 22122 19062 1 4 96 0 0
zzz ***Mon Sep 4 15:08:02 CST 2017
procs -----------memory---------- ---swap-- -----io---- -system-- ------cpu-----
r b swpd free buff cache si so bi bo in cs us sy id wa st
4 0 0 617279552 19412 31615300 0 0 82 48 0 0 1 0 99 0 0
2 0 0 624415168 19412 31617816 0 0 922 2 25322 20023 1 2 98 0 0
2 0 0 631768448 19412 31615728 0 0 1497 3 25405 22582 1 1 98 0 0

Seeing this, the natural thought is: OSW has a gap of more than three minutes in the middle, so was system performance so poor that OSW could not produce its snapshots? Generally, though, you would see some warning signs before such a gap, such as a sharp rise in the blocked queue, and there is no such symptom in this case. A quick way to spot gaps like this is sketched below; after that, on to the OS log.
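
A minimal gap detector for OSWatcher vmstat archives (my sketch, not from the original post). It keys off the "zzz ***" timestamp lines shown above; the 30-second threshold and the file name are assumptions:

from datetime import datetime

GAP_THRESHOLD = 30  # seconds; ~3x the 10-second snapshot interval seen above

def find_gaps(path):
    prev = None
    with open(path) as f:
        for line in f:
            if not line.startswith("zzz ***"):
                continue
            # "zzz ***Mon Sep 4 15:04:13 CST 2017" -> drop the prefix and the
            # time-zone token, leaving "Mon Sep 4 15:04:13 2017".
            parts = line[7:].split()
            ts = datetime.strptime(" ".join(parts[:4] + parts[5:]),
                                   "%a %b %d %H:%M:%S %Y")
            if prev is not None and (ts - prev).total_seconds() > GAP_THRESHOLD:
                print(f"gap of {(ts - prev).total_seconds():.0f}s "
                      f"between {prev} and {ts}")
            prev = ts

find_gaps("hnpb05nc_vmstat_17.09.04.1500.dat")  # hypothetical file name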

4. The OS log contains the key line:

Sep 4 15:04:01 hnpb05nc crond: /usr/sbin/postdrop: /lib64/libssl.so.10: no version information available (required by /usr/lib64/mysql/libmysqlclient.so.18)
Sep 4 15:04:21 hnpb05nc init.tfa: Checking/Starting TFA..
Sep 4 15:07:47 hnpb05nc systemd: Time has been changed <<<<<<<<<<<<<<<<<<< the system time was changed.
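
Given the line above, it is worth sweeping the whole OS log for systemd time-change events. A minimal sketch, assuming the default /var/log/messages location:

import re

def time_change_events(path="/var/log/messages"):
    """Return the syslog lines where systemd reports a time change."""
    pattern = re.compile(r"Time has been changed")
    with open(path, errors="replace") as f:
        return [line.rstrip() for line in f if pattern.search(line)]

for event in time_change_events():
    print(event)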

5. Next, look at the CTSSD trace:

2017-09-04 15:04:25.799241 : CTSS:3933169408: ctssslave_swm19: The offset is [2311562070 usec] and sync interval set to [4] <<< offset is ~2311 seconds
2017-09-04 15:04:25.799251 : CTSS:3933169408: ctsselect_msm: Sync interval returned in [4]
2017-09-04 15:04:25.799260 : CTSS:3937371904: ctssslave_msg_handler4_3: slave_sync_with_master finished sync process. Exiting clsctssslave_msg_handler

2017-09-04 15:04:26.800845 : CTSS:3933169408: ctssslave_swm19: The offset is [2311562609 usec] and sync interval set to [4] <<< offset is ~2311 seconds
2017-09-04 15:04:26.800856 : CTSS:3933169408: ctsselect_msm: Sync interval returned in [4]
2017-09-04 15:04:26.800864 : CTSS:3937371904: ctssslave_msg_handler4_3: slave_sync_with_master finished sync process. Exiting clsctssslave_msg_handler

2017-09-04 15:04:27.802328 : CTSS:3933169408: ctssslave_swm19: The offset is [2311563057 usec] and sync interval set to [4] <<< offset is ~2311 seconds
2017-09-04 15:04:27.802337 : CTSS:3933169408: ctsselect_msm: Sync interval returned in [4]
2017-09-04 15:04:27.802346 : CTSS:3937371904: ctssslave_msg_handler4_3: slave_sync_with_master finished sync process. Exiting clsctssslave_msg_handler

2017-09-04 15:07:47.065051 : CTSS:3933169408: ctssslave_swm19: The offset is [2509824742 usec] and sync interval set to [4] <<< offset jumps to ~2509 seconds
2017-09-04 15:07:47.065068 : CTSS:3933169408: ctsselect_msm: Sync interval returned in [4]
2017-09-04 15:07:47.065077 : CTSS:3937371904: ctssslave_msg_handler4_3: slave_sync_with_master finished sync process. Exiting

Clearly, the offset grew by roughly 200 seconds during the problem window, whereas before that it had been relatively stable. This too indirectly confirms that the system time was adjusted.
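
To make that concrete, here is a minimal sketch that extracts the offsets reported by ctssslave_swm from a CTSSD trace and flags any jump between consecutive samples; the 10-second threshold and the trace file name are assumptions. Against the lines above it would report the ~198-second jump between 15:04:27 and 15:07:47:

import re

# e.g. "2017-09-04 15:04:25.799241 : CTSS:... The offset is [2311562070 usec] ..."
LINE = re.compile(r"^(\S+ \S+) .*The offset is \[(-?\d+) usec\]")
JUMP_THRESHOLD = 10_000_000  # 10 s in microseconds (assumption)

def offset_jumps(path):
    prev = None
    with open(path) as f:
        for line in f:
            m = LINE.search(line)
            if not m:
                continue
            ts, usec = m.group(1), int(m.group(2))
            if prev is not None and abs(usec - prev[1]) > JUMP_THRESHOLD:
                print(f"offset jumped {(usec - prev[1]) / 1e6:+.1f}s "
                      f"between {prev[0]} and {ts}")
            prev = (ts, usec)

offset_jumps("octssd.trc")  # hypothetical trace file name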

The story:

Here is what happened. NTP was configured on the system, but due to some problem the NTP daemon was not running. Because an NTP configuration was present, CTSSD found the configuration file and therefore ran only in observer mode. The result was that the system time kept drifting, until the system administrator noticed the problem and manually moved the system clock forward by about 200 seconds... ASMB's responsiveness is judged from the system time, so it appeared to have been unresponsive for 200 seconds (which, of course, was not really the case), and then...
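
The failure mode is easy to reproduce in miniature: a liveness check based on the wall clock sees a forward clock jump as silence, while one based on a monotonic clock does not. A toy sketch (this illustrates the mechanism only, not Oracle's actual code):

import time

def make_checker(clock, timeout=200):
    """Return beat() and is_hung() closures driven by the given clock."""
    last = [clock()]
    def beat():
        last[0] = clock()
    def is_hung():
        return clock() - last[0] > timeout
    return beat, is_hung

# A "wall clock" we can adjust, standing in for the real system time.
adjustment = [0.0]
fake_wall = lambda: time.time() + adjustment[0]

beat, is_hung = make_checker(fake_wall)
beat()                    # the monitored process just responded
adjustment[0] += 200      # admin jumps the system clock forward 200 seconds
print(is_hung())          # True: looks like 200 s of silence, but it isn't

beat_m, is_hung_m = make_checker(time.monotonic)
beat_m()
print(is_hung_m())        # False: a monotonic clock is immune to the jump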

Recommendations:

Naturally, we should monitor the system and make sure NTP stays up and running. And if we really do need to make a large manual adjustment to the system time, we should shut down the RAC databases first and only then change the clock. (A simple pre-change check is sketched below.)
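
A minimal pre-change sketch, assuming the Grid Infrastructure tools (crsctl, cluvfy) are on PATH: crsctl check ctss reports whether CTSS is in Active or Observer mode, and cluvfy comp clocksync verifies clock synchronization across nodes:

import subprocess

def run(cmd):
    """Run a command and echo its output; returns stdout for inspection."""
    res = subprocess.run(cmd, capture_output=True, text=True)
    print("$", " ".join(cmd))
    print(res.stdout + res.stderr)
    return res.stdout

ctss = run(["crsctl", "check", "ctss"])            # Active vs Observer mode
run(["cluvfy", "comp", "clocksync", "-n", "all"])  # cross-node clock sync check

if "Observer" in ctss:
    print("CTSS is only observing: verify that ntpd/chronyd is actually "
          "running before trusting the clock.")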
