RAID 5 (https://baike.baidu.com/item/RAID%205/10898513) is a disk redundancy technique, and it is certainly not a cure-all. That is how anti-RAID 5 groups even came to exist (http://www.baarf.com/, http://www.miracleas.com/BAARF/RAID5_versus_RAID10.txt). Still, in overall price/performance terms RAID 5 currently remains the best, especially when built from SATA drives... which, actually, I strongly advise against.
Why advise against it? From my ten-plus years of playing with RAID, SATA drives are better off in RAID 0+1, which is also why this post's title contains the word "again": the previous RAID 5 failure I ran into was an array of five SATA drives. One drive developed read/write errors and dropped offline, and while the array was rebuilding a second drive also developed read/write errors and dropped offline, taking the whole RAID 5 from DEGRADED to FAILURE. Luckily that array held nothing but expendable data, so in a fit of pique I simply skipped any rescue attempt. The lessons from that earlier RAID 5 failure were:
1) For a SATA RAID 5, you must buy drives with good fault tolerance. You get what you pay for: always read the Reliability/Data Integrity section of the product's Data Sheet, where the simplest figure to compare is the Mean Time Between Failures (MTBF). That array ended up with five Seagate Constellation ES-series drives (MTBF: 1.2 million hours) replacing the original Seagate Barracuda 7200-series drives (MTBF: 750,000 hours).
2) When backing up data off a DEGRADED RAID 5, unmount the volume in the operating system first if at all possible (remove the drive letter) and have the backup software image the partition directly, so the array sees only reads and no writes; a minimal imaging sketch follows.
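Here is a rough illustration of that read-only imaging idea in Python, in the spirit of ddrescue. This is only a sketch: the source path /dev/sdb1 is a hypothetical Linux partition (a raw Windows volume would be opened as \\.\E: instead), and unreadable chunks are zero-filled with their offsets logged, so the degraded array only ever sees read requests:

# Read-only image of a partition on a degraded array; skip unreadable chunks.
SRC = "/dev/sdb1"        # hypothetical source partition, opened read-only
DST = "backup.img"       # destination image file on a healthy disk
CHUNK = 1024 * 1024      # read 1 MiB at a time

bad = []
with open(SRC, "rb", buffering=0) as src, open(DST, "wb") as dst:
    offset = 0
    while True:
        try:
            data = src.read(CHUNK)
        except OSError:                  # medium error inside this chunk
            bad.append(offset)
            data = b"\x00" * CHUNK       # zero-fill and move past it
            src.seek(offset + CHUNK)
        if not data:                     # clean end of device
            break
        dst.write(data)
        offset += len(data)

print(f"imaged {offset} bytes; {len(bad)} unreadable chunk(s) at {bad}")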
Back to the subject: this time it was a SAS array's turn to fail. This SAS RAID 5 consists of four SAS drives (numbered 0-3); the array controller is a DELL PERC 6i built on the LSI SAS1078 chip, equivalent to an LSI MegaRAID SAS 8888ELP (see: http://www.stephenyeong.idv.hk/wp/2011/09/dell-perc-lsi-mode/); the management software, called DELL SAS RAID STORAGE MANAGER, is in fact a rebadged LSI MegaRAID Storage Manager. I won't go into its features in detail here.
The failure struck after the operating system launched an application with a local database, one that downloads records from the net in bulk, checks them against the local database, and writes them in. Barely five minutes in, the OS popped up a message saying drive 1 had dropped offline and the array had automatically started a REBUILD. Drawing on the previous RAID 5 experience, I immediately shut down this read/write-heavy program, opened the management software to check on the array, and settled in to wait for the rebuild. Unfortunately, a few minutes later the management software popped up again: drive 0 had a medium error and the array had been punctured. My heart sank: was this array GAME OVER? Still, the REBUILD kept running and eventually completed successfully.
So I started a backup right away, though during the backup the system reported disk read errors. Checking the array's event log, the entries related to "puncturing" were as follows:
ID = 2495
SEQUENCE NUMBER = 42231
TIME = 22-10-2013 09:23:50
LOCALIZED MESSAGE = Controller ID: 0 Puncturing bad block: PD --:--:0 Location 0x1e877a3
ID = 2494
SEQUENCE NUMBER = 42230
TIME = 22-10-2013 09:23:50
LOCALIZED MESSAGE = Controller ID: 0 Puncturing bad block: PD --:--:1 Location 0x1e877a3
ID = 2493
SEQUENCE NUMBER = 42229
TIME = 22-10-2013 09:23:49
LOCALIZED MESSAGE = Controller ID: 0 Unrecoverable medium error during recovery: PD --:--:0 Location 0x1e877a3
ID = 2492
SEQUENCE NUMBER = 42228
TIME = 22-10-2013 09:23:49
LOCALIZED MESSAGE = Controller ID: 0 Unexpected sense: PD = --:--:0, CDB = 0x28 0x00 0x01 0xe8 0x77 0xa0 0x00 0x00 0x20 0x00 , Sense = 0xf0 0x00 0x03 0x01 0xe8 0x77 0xa3 0x0a 0x00 0x00 0x00 0x00 0x11 0x00 0x81 0x80 0x00 0x97
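Those "Unexpected sense" bytes can be decoded by hand to confirm the diagnosis. A minimal sketch in Python, with the byte strings copied from the last entry above and field offsets following the standard SCSI fixed-format sense layout:

# Decode the CDB and fixed-format sense data from the log entry above.
cdb   = bytes.fromhex("280001e877a000002000")
sense = bytes.fromhex("f0000301e877a30a00000000110081800097")

# CDB: opcode 0x28 is READ(10); bytes 2-5 hold the start LBA,
# bytes 7-8 the transfer length in blocks.
start_lba = int.from_bytes(cdb[2:6], "big")
length    = int.from_bytes(cdb[7:9], "big")
print(f"READ(10) at LBA {start_lba:#x}, {length} blocks")   # 0x1e877a0, 32

# Sense: byte 2 low nibble = sense key, bytes 3-6 = Information field
# (the failing LBA), bytes 12-13 = ASC/ASCQ.
key       = sense[2] & 0x0F
fail_lba  = int.from_bytes(sense[3:7], "big")
asc, ascq = sense[12], sense[13]
print(f"sense key {key:#x}, failing LBA {fail_lba:#x}, "
      f"ASC/ASCQ {asc:#04x}/{ascq:#04x}")

So the controller issued a READ(10) covering LBAs 0x1e877a0-0x1e877bf, and drive 0 answered with sense key 0x3 (MEDIUM ERROR) and ASC/ASCQ 0x11/0x00 (unrecovered read error) at LBA 0x1e877a3 -- exactly the Location reported in the puncture entries.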
A web search for what "puncturing" means turned up a detailed explanation on DELL's website:
Double Faults and Puncture Conditions in RAID Arrays
It turns out that a puncture is a mechanism designed to prevent a RAID 5 array from failing outright, and thus becoming unreadable (and unbackupable), when two or more drives develop faults at the same time. The controller fabricates bad sectors and reports them to the operating system, so that everything outside the damaged spots stays readable and the array as a whole never fails. Put simply, it keeps the loss as small as possible.
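To make "keeps the loss as small as possible" concrete, here is a toy single-stripe model in Python. It is purely conceptual, not the controller's actual algorithm: with one block missing per stripe, parity reconstructs it; with two missing in the same stripe, only that stripe has to be given up:

from functools import reduce

def xor(blocks):
    # RAID 5 parity is the bytewise XOR across the blocks of a stripe.
    return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*blocks))

d0, d1, d2 = b"AAAA", b"BBBB", b"CCCC"    # three data blocks of one stripe
parity = xor([d0, d1, d2])

# One lost block: recoverable from the survivors plus parity.
assert xor([d1, d2, parity]) == d0

def read_stripe(lost):
    # Two losses in the SAME stripe (e.g. drive 1 offline for rebuild plus a
    # medium error on drive 0 there) leave nothing to XOR against, so the
    # controller punctures the stripe: it records the block as bad and fails
    # reads there, while every other stripe of the array stays readable.
    if len(lost) >= 2:
        raise IOError("punctured stripe: unrecoverable medium error")
    return "stripe reconstructed from parity"

print(read_stripe({"d0"}))
try:
    read_stripe({"d0", "d1"})
except IOError as err:
    print(err)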
But how do you get a punctured array back to a healthy state? DELL gives the following procedure (a command-line sketch follows the list):
1) Back up the data first
2) Discard all cached data
3) Clear any imported foreign configuration
4) Delete the array
5) Move every disk in the array to a different slot. This is the most important step.
6) Re-create the array
7) Choose FULL INITIALIZATION rather than FAST INITIALIZATION. Note: the controller sometimes starts a fast background initialization on its own; cancel it.
8) Run a CHECK CONSISTENCY operation to verify array integrity. If the check reports no errors, the array can be considered back to a normal state, and the data can then be restored.
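For controllers managed from the command line, these steps map onto LSI's MegaCLI utility. The Python sketch below only illustrates that mapping: the adapter/VD/enclosure/slot numbers are placeholders for your own, exact flag spellings vary between MegaCli versions, and step 5 (physically reshuffling the drives) has no command-line equivalent:

import subprocess

ADP, VD, ENC = "0", "0", "32"   # placeholder adapter, VD and enclosure IDs

def megacli(*args):
    # Run one MegaCli command; raises if MegaCli is absent or the step fails.
    cmd = ["MegaCli", *args, f"-a{ADP}"]
    print(" ".join(cmd))
    subprocess.run(cmd, check=True)

megacli("-DiscardPreservedCache", f"-L{VD}")          # 2) drop cached data
megacli("-CfgForeign", "-Clear")                      # 3) clear foreign config
megacli("-CfgLdDel", f"-L{VD}")                       # 4) delete the array
# 5) power down and move every disk to a different slot (manual step)
megacli("-CfgLdAdd", "-r5",                           # 6) re-create the array
        f"[{ENC}:0,{ENC}:1,{ENC}:2,{ENC}:3]")
megacli("-LDInit", "-Start", "-full", f"-L{VD}")      # 7) FULL initialization
megacli("-LDCC", "-Start", f"-L{VD}")                 # 8) check consistency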
A quick test confirmed that simply deleting and re-creating the array does not clear the puncture state. So I followed DELL's procedure to the letter, and at the same time replaced drive 0, the one with the genuine medium errors, with a new drive. After the array was re-created and fully checked, it was confirmed to be back to normal. The total real loss came down to the two files hit by the disk errors, both of them expendable.
Better late than never: a thorough check of the RAID plus a post-mortem yielded the following lessons:
1) On the surface this incident was set off by drive 1 dropping offline, but checking the RAID card's log shows it actually began with drive 0. Just before drive 1 dropped, drive 0 had run into yet another medium error; while the RAID card was correcting it, the system happened to be under heavy concurrent I/O, and drive 1 could not keep up under the pressure (this involves how the drives in a RAID set are kept in sync, which I won't detail here) and so fell offline. The relevant log entries (newest first):
ID = 2490
SEQUENCE NUMBER = 42222
TIME = 22-10-2013 09:02:59
LOCALIZED MESSAGE = Controller ID: 0 State change: PD = --:--:1 Previous = Offline Current = Rebuild
ID = 2489
SEQUENCE NUMBER = 42221
TIME = 22-10-2013 09:02:59
LOCALIZED MESSAGE = Controller ID: 0 Rebuild automatically started: PD --:--:1
ID = 2488
SEQUENCE NUMBER = 42220
TIME = 22-10-2013 09:02:59
LOCALIZED MESSAGE = Controller ID: 0 State change: PD = --:--:1 Previous = Unconfigured Good Current = Offline
ID = 2487
SEQUENCE NUMBER = 42219
TIME = 22-10-2013 09:02:59
LOCALIZED MESSAGE = Controller ID: 0 State change: PD = --:--:1 Previous = Unconfigured Bad Current = Unconfigured Good
ID = 2486
SEQUENCE NUMBER = 42218
TIME = 22-10-2013 09:02:58
LOCALIZED MESSAGE = Controller ID: 0 Device inserted Device Type: Disk Device Id: : :1
ID = 2485
SEQUENCE NUMBER = 42217
TIME = 22-10-2013 09:02:58
LOCALIZED MESSAGE = Controller ID: 0 PD inserted: --:--:1
ID = 2484
SEQUENCE NUMBER = 42216
TIME = 22-10-2013 09:02:42
LOCALIZED MESSAGE = Controller ID: 0 State change: PD = --:--:1 Previous = Failed Current = Unconfigured Bad
ID = 2483
SEQUENCE NUMBER = 42215
TIME = 22-10-2013 09:02:42
LOCALIZED MESSAGE = Controller ID: 0 VD is now DEGRADED VD 3
ID = 2482
SEQUENCE NUMBER = 42214
TIME = 22-10-2013 09:02:42
LOCALIZED MESSAGE = Controller ID: 0 State change on VD: 3 Previous = Optimal Current = Degraded
ID = 2481
SEQUENCE NUMBER = 42213
TIME = 22-10-2013 09:02:42
LOCALIZED MESSAGE = Controller ID: 0 VD is now DEGRADED VD 2
ID = 2480
SEQUENCE NUMBER = 42212
TIME = 22-10-2013 09:02:42
LOCALIZED MESSAGE = Controller ID: 0 State change on VD: 2 Previous = Optimal Current = Degraded
ID = 2479
SEQUENCE NUMBER = 42211
TIME = 22-10-2013 09:02:42
LOCALIZED MESSAGE = Controller ID: 0 VD is now DEGRADED VD 1
ID = 2478
SEQUENCE NUMBER = 42210
TIME = 22-10-2013 09:02:42
LOCALIZED MESSAGE = Controller ID: 0 State change on VD: 1 Previous = Optimal Current = Degraded
ID = 2477
SEQUENCE NUMBER = 42209
TIME = 22-10-2013 09:02:42
LOCALIZED MESSAGE = Controller ID: 0 VD is now DEGRADED VD 0
ID = 2476
SEQUENCE NUMBER = 42208
TIME = 22-10-2013 09:02:42
LOCALIZED MESSAGE = Controller ID: 0 State change on VD: 0 Previous = Optimal Current = Degraded
ID = 2475
SEQUENCE NUMBER = 42207
TIME = 22-10-2013 09:02:42
LOCALIZED MESSAGE = Controller ID: 0 State change: PD = --:--:1 Previous = Online Current = Failed
ID = 2474
SEQUENCE NUMBER = 42206
TIME = 22-10-2013 09:02:42
LOCALIZED MESSAGE = Controller ID: 0 Device removed Device Type: Disk Device Id: : :1
ID = 2473
SEQUENCE NUMBER = 42205
TIME = 22-10-2013 09:02:42
LOCALIZED MESSAGE = Controller ID: 0 PD removed: --:--:1
ID = 2472
SEQUENCE NUMBER = 42204
TIME = 22-10-2013 09:02:41
LOCALIZED MESSAGE = Controller ID: 0 PD Reset: PD = --:--:1, Error = 3, Path = 50:00:cc:a0:0f:b8:5e:e5
ID = 2471
SEQUENCE NUMBER = 42203
TIME = 22-10-2013 09:00:13
LOCALIZED MESSAGE = Controller ID: 0 Time established since power on: Time 2013-10-22,09:00:13 1571 Seconds
ID = 2470
SEQUENCE NUMBER = 42202
TIME = 22-10-2013 08:50:31
LOCALIZED MESSAGE = Controller ID: 0 Corrected medium error during recovery: PD --:--:0 Location 0x302fd07
ID = 2469
SEQUENCE NUMBER = 42201
TIME = 22-10-2013 08:50:30
LOCALIZED MESSAGE = Controller ID: 0 Unexpected sense: PD = --:--:0, CDB = 0x28 0x00 0x03 0x02 0xfd 0x00 0x00 0x00 0x20 0x00 , Sense = 0xf0 0x00 0x03 0x03 0x02 0xfd 0x07 0x0a 0x00 0x00 0x00 0x00 0x11 0x00 0x81 0x80 0x00 0x97
ID = 2468
SEQUENCE NUMBER = 42200
TIME = 22-10-2013 08:50:28
LOCALIZED MESSAGE = Controller ID: 0 Unexpected sense: PD = --:--:0, CDB = 0x28 0x00 0x03 0x02 0xfd 0x00 0x00 0x00 0x20 0x00 , Sense = 0xf0 0x00 0x03 0x03 0x02 0xfd 0x07 0x0a 0x00 0x00 0x00 0x00 0x11 0x00 0x81 0x80 0x00 0x97
2) Heavy concurrent I/O genuinely stresses mechanical hard drives, which is why administrators with the budget for it go straight to an 8-drive RAID 5 (or even RAID 6) and spread the load over more spindles. Sadly, my budget didn't stretch that far. A back-of-the-envelope illustration follows.
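With assumed numbers: random reads on a RAID 5 are spread over all of its spindles, so each drive in an 8-drive set sees roughly half the load of a drive in a 4-drive set under the same workload:

WORKLOAD_IOPS = 800   # assumed aggregate random-read workload

for n in (4, 8):
    # Random reads hit all n spindles, so each drive takes about 1/n of them.
    print(f"{n}-drive RAID 5: ~{WORKLOAD_IOPS / n:.0f} IOPS per drive")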
3) The drive failure announced itself well in advance. Going back through the RAID card's older logs, drive 0 had already hit medium errors repeatedly during patrol reads several months earlier; since this is RAID 5, every one of them was corrected. A sample entry:
ID = 1959
SEQUENCE NUMBER = 41644
TIME = 08-10-2013 20:43:58
LOCALIZED MESSAGE = Controller ID: 0 Patrol Read corrected medium error: PD --:--:0 Location 0x1b408247
Because they were all corrected, the management software at its default settings never popped up a warning. By that measure these tools are still not smart enough: they should keep a count against a threshold and, once a drive racks up more corrected medium errors than the threshold allows, prompt the user to replace the disk preemptively. A sketch of that idea follows.
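A minimal sketch of that missing feature in Python, run against an exported event log. The file name and threshold are assumptions; the regular expression matches the two corrected-error message formats quoted in this post:

import re
from collections import Counter

THRESHOLD = 5   # assumed: corrected medium errors tolerated per drive
PATTERN = re.compile(
    r"(?:Patrol Read corrected|Corrected) medium error.*?: PD (--:--:\d+)")

errors = Counter()
with open("raid_events.log", encoding="utf-8") as log:   # assumed export file
    for line in log:
        m = PATTERN.search(line)
        if m:
            errors[m.group(1)] += 1

for pd, n in errors.items():
    if n >= THRESHOLD:
        print(f"drive {pd}: {n} corrected medium errors -- replace it soon")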
When reading drive specs, note that some vendors' Data Sheets quote an Annualized Failure Rate (AFR) instead of MTBF (mean time between failures). The two express the same thing and, to a good approximation while the AFR is small, convert into each other:
MTBF = (hours in a year) / AFR
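A quick check of the conversion against the two drive families from lesson 1, taking 8,760 hours in a year:

HOURS_PER_YEAR = 8760

for name, mtbf in [("Barracuda 7200", 750_000),
                   ("Constellation ES", 1_200_000)]:
    afr = HOURS_PER_YEAR / mtbf   # fraction of drives expected to fail per year
    print(f"{name}: MTBF {mtbf:,} h -> AFR {afr:.2%}")
# Barracuda 7200: MTBF 750,000 h -> AFR 1.17%
# Constellation ES: MTBF 1,200,000 h -> AFR 0.73%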