linux – Why did rebooting cause one side of my ZFS mirror to become UNAVAIL?

I recently migrated my bulk data storage pool (ZFS On Linux 0.6.2, Debian Wheezy) from a single-device vdev configuration to a two-way mirror vdev configuration.

The previous pool configuration was:

NAME                     STATE     READ WRITE CKSUM
akita                    ONLINE       0     0     0
  ST4000NM0033-Z1Z1A0LQ  ONLINE       0     0     0

Everything was fine after the resilver completed (I initiated a scrub once it finished, just to have the system go over everything once more and make sure it was all good):

  pool: akita
 state: ONLINE
  scan: scrub repaired 0 in 6h26m with 0 errors on Sat May 17 06:16:06 2014
config:

        NAME                       STATE     READ WRITE CKSUM
        akita                      ONLINE       0     0     0
          mirror-0                 ONLINE       0     0     0
            ST4000NM0033-Z1Z1A0LQ  ONLINE       0     0     0
            ST4000NM0033-Z1Z333ZA  ONLINE       0     0     0

errors: No known data errors

However, after a reboot I got an email notifying me that the pool was not all fine and dandy. I had a look, and this is what I saw:

  pool: akita
 state: DEGRADED
status: One or more devices could not be used because the label is missing or
        invalid.  Sufficient replicas exist for the pool to continue
        functioning in a degraded state.
action: Replace the device using 'zpool replace'.
   see: http://zfsonlinux.org/msg/ZFS-8000-4J
  scan: scrub in progress since Sat May 17 14:20:15 2014
        316G scanned out of 1,80T at 77,5M/s, 5h36m to go
        0 repaired, 17,17% done
config:

        NAME                       STATE     READ WRITE CKSUM
        akita                      DEGRADED     0     0     0
          mirror-0                 DEGRADED     0     0     0
            ST4000NM0033-Z1Z1A0LQ  ONLINE       0     0     0
            ST4000NM0033-Z1Z333ZA  UNAVAIL      0     0     0

errors: No known data errors

The scrub itself was expected; there is a cron job set up to initiate a full-system scrub on reboot. However, I definitely did not expect the new HDD to fall out of the mirror.
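
(For what it's worth, such a scrub-on-reboot job can be as simple as the crontab entry below; this is a minimal sketch of the idea, not the actual entry on my system:)

# /etc/cron.d/zfs-scrub (hypothetical): kick off a full scrub of the pool at every boot
@reboot root /sbin/zpool scrub akita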

I have defined aliases that map to the /dev/disk/by-id/wwn-* names, and in the case of both of these disks ZFS has been given free rein to use the full disk, including handling partitioning:

# zpool history akita | grep ST4000NM0033
2013-09-12.18:03:06 zpool create -f -o ashift=12 -o autoreplace=off -m none akita ST4000NM0033-Z1Z1A0LQ
2014-05-15.15:30:59 zpool attach -o ashift=12 -f akita ST4000NM0033-Z1Z1A0LQ ST4000NM0033-Z1Z333ZA
#

These are the relevant lines from /etc/zfs/vdev_id.conf (I notice now that the Z1Z333ZA line uses tabs for separation, whereas the Z1Z1A0LQ line uses only spaces, but I honestly don't see how that could be relevant here):

alias ST4000NM0033-Z1Z1A0LQ             /dev/disk/by-id/wwn-0x5000c500645b0fec
alias ST4000NM0033-Z1Z333ZA     /dev/disk/by-id/wwn-0x5000c50065e8414a

When I looked, /dev/disk/by-id/wwn-0x5000c50065e8414a* existed as expected, but /dev/disk/by-vdev/ST4000NM0033-Z1Z333ZA* did not.
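
(Checking was a matter of listing both sets of links, along these lines; the first command showed the expected entries, the second came up empty:)

# ls -l /dev/disk/by-id/wwn-0x5000c50065e8414a*
# ls -l /dev/disk/by-vdev/ST4000NM0033-Z1Z333ZA*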

Issuing sudo udevadm trigger caused the symlinks to show up in /dev/disk/by-vdev. However, ZFS did not seem to realize they were there (Z1Z333ZA still showed as UNAVAIL). I suppose that much can be expected.
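
(That is, something along these lines; the -part1/-part9 entries assume the usual layout ZFS creates when given a whole disk:)

$ sudo udevadm trigger
$ ls /dev/disk/by-vdev/ | grep Z1Z333ZA
ST4000NM0033-Z1Z333ZA
ST4000NM0033-Z1Z333ZA-part1
ST4000NM0033-Z1Z333ZA-part9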

I tried replacing the relevant device, but had no real luck:

# zpool replace akita ST4000NM0033-Z1Z333ZA
invalid vdev specification
use '-f' to override the following errors:
/dev/disk/by-vdev/ST4000NM0033-Z1Z333ZA-part1 is part of active pool 'akita'
#

Both disks are detected during boot (dmesg log output showing the relevant drives):

[    2.936065] ata2: SATA link up 6.0 Gbps (SStatus 133 SControl 300)
[    2.936137] ata4: SATA link up 6.0 Gbps (SStatus 133 SControl 300)
[    2.937446] ata4.00: ATA-9: ST4000NM0033-9ZM170, SN03, max UDMA/133
[    2.937453] ata4.00: 7814037168 sectors, multi 16: LBA48 NCQ (depth 31/32), AA
[    2.938516] ata4.00: configured for UDMA/133
[    2.992080] ata6: SATA link up 6.0 Gbps (SStatus 133 SControl 300)
[    3.104533] ata6.00: ATA-9: ST4000NM0033-9ZM170, SN03, max UDMA/133
[    3.104540] ata6.00: 7814037168 sectors, multi 16: LBA48 NCQ (depth 31/32), AA
[    3.105584] ata6.00: configured for UDMA/133
[    3.105792] scsi 5:0:0:0: Direct-Access     ATA      ST4000NM0033-9ZM SN03 PQ: 0 ANSI: 5
[    3.121245] sd 3:0:0:0: [sdb] 7814037168 512-byte logical blocks: (4.00 TB/3.63 TiB)
[    3.121372] sd 3:0:0:0: [sdb] Write Protect is off
[    3.121379] sd 3:0:0:0: [sdb] Mode Sense: 00 3a 00 00
[    3.121426] sd 3:0:0:0: [sdb] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
[    3.122070] sd 5:0:0:0: [sdc] 7814037168 512-byte logical blocks: (4.00 TB/3.63 TiB)
[    3.122176] sd 5:0:0:0: [sdc] Write Protect is off
[    3.122183] sd 5:0:0:0: [sdc] Mode Sense: 00 3a 00 00
[    3.122235] sd 5:0:0:0: [sdc] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA

Both drives are connected directly to the motherboard; there is no off-board controller involved.

On a whim, I did:

# zpool online akita ST4000NM0033-Z1Z333ZA

That seemed to work; Z1Z333ZA is now at least ONLINE and resilvering. At about an hour into the resilver it has scanned 180G and resilvered 24G, 9.77% done, which would indicate that it is not doing a full resilver but rather only transferring the dataset delta.
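
(Resilver progress is easy to keep an eye on by polling the status output; a trivial sketch:)

# watch -n 60 'zpool status akita | grep -A 2 scan:'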

Honestly, I am not sure whether this problem lies with ZFS On Linux or with udev (it smells a bit like udev, but then why would one drive be detected just fine and not the other), but my question is: how do I make sure the same thing does not happen again on the next reboot?

I will gladly provide more data on the setup if necessary; just let me know what is needed.

Solution

This is a udev issue that seems to be specific to Debian and Ubuntu variants. Most of my ZFS on Linux usage is with CentOS/RHEL.

Similar threads on the ZFS discussion list have mentioned this.

See: scsi and ata entries for same hard drive under /dev/disk/by-id
and: ZFS on Linux/Ubuntu: Help importing a zpool after Ubuntu upgrade from 13.04 to 13.10, device IDs have changed

I am not sure what the most deterministic pool device approach is for Debian/Ubuntu systems. For RHEL, I prefer to use device WWNs on general pool devices. But other times, the device name/serial is useful as well. Either way, udev should be able to keep all of this in check.
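
For illustration, a mirrored pool built directly on the WWN links can be created along these lines (a sketch only; the WWNs are the ones from the status output below, and the ashift option is an assumption carried over from the question):

# zpool create -o ashift=12 vol1 \
    mirror /dev/disk/by-id/wwn-0x500000e014609480 /dev/disk/by-id/wwn-0x500000e0146097d0 \
    mirror /dev/disk/by-id/wwn-0x500000e0146090c0 /dev/disk/by-id/wwn-0x500000e01460fd60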

# zpool status
  pool: vol1
 state: ONLINE
  scan: scrub repaired 0 in 0h32m with 0 errors on Sun Feb 16 17:34:42 2014
config:

        NAME                        STATE     READ WRITE CKSUM
        vol1                        ONLINE       0     0     0
          mirror-0                  ONLINE       0     0     0
            wwn-0x500000e014609480  ONLINE       0     0     0
            wwn-0x500000e0146097d0  ONLINE       0     0     0
          mirror-1                  ONLINE       0     0     0
            wwn-0x500000e0146090c0  ONLINE       0     0     0
            wwn-0x500000e01460fd60  ONLINE       0     0     0
