LVM: "Couldn't find device with uuid" but blkid finds the UUID

I have a SLES 11.2 PPC (3.0.58-0.6.6-ppc64) system that has lost track of its volume group (containing LVs whose data is not critical, but it would be nice to get back). The disks are attached via two fiber paths to a SAN.

The problem appeared when I rebooted it last Friday ahead of a planned power outage; I did not have time to troubleshoot before it had to be brought down again. The volume group had previously been in successful use for about two years.

vgscan and pvscan return nothing:

# pvscan -vP
Partial mode. Incomplete logical volumes will be processed.
 Wiping cache of LVM-capable devices
 Wiping internal VG cache
 Walking through all physical volumes
No matching physical volumes found
# vgscan -vP
Partial mode. Incomplete logical volumes will be processed.
  Wiping cache of LVM-capable devices
  Wiping internal VG cache
Reading all physical volumes.  This may take a while...
  Finding all volume groups
No volume groups found

vgcfgrestore reports that it can't find the PVs:

# vgcfgrestore vgclients
Couldn't find device with uuid PyKfIa-cCs9-gBoh-Qb50-yOw4-dHQw-N1YELU.
Couldn't find device with uuid FXfSAO-P9hO-Dgtl-0Ihf-x2jX-TnHU-kSqUA2.
Cannot restore Volume Group vgclients with 2 PVs marked as missing.
Restore Failed.

However, blkid can find those UUIDs:

# blkid -t UUID=PyKfIa-cCs9-gBoh-Qb50-yOw4-dHQw-N1YELU
/dev/mapper/3600a0b800029df24000011084db97741: UUID="PyKfIa-cCs9-gBoh-Qb50-yOw4-dHQw-N1YELU" TYPE="LVM2_member" 
/dev/sdl: UUID="PyKfIa-cCs9-gBoh-Qb50-yOw4-dHQw-N1YELU" TYPE="LVM2_member" 
/dev/sdw: UUID="PyKfIa-cCs9-gBoh-Qb50-yOw4-dHQw-N1YELU" TYPE="LVM2_member" 
# blkid -t UUID=FXfSAO-P9hO-Dgtl-0Ihf-x2jX-TnHU-kSqUA2
/dev/mapper/3600a0b800029df24000017ae4f45f30b: UUID="FXfSAO-P9hO-Dgtl-0Ihf-x2jX-TnHU-kSqUA2" TYPE="LVM2_member" 
/dev/sdg: UUID="FXfSAO-P9hO-Dgtl-0Ihf-x2jX-TnHU-kSqUA2" TYPE="LVM2_member" 
/dev/sdr: UUID="FXfSAO-P9hO-Dgtl-0Ihf-x2jX-TnHU-kSqUA2" TYPE="LVM2_member"

/etc/lvm/backup/vgclients has all the right information and doesn't say the PVs are missing:

# egrep "(N1YELU|kSqUA2|dm-|ALLOC)" /etc/lvm/backup/vgclients
                    id = "PyKfIa-cCs9-gBoh-Qb50-yOw4-dHQw-N1YELU"
                    device = "/dev/dm-7"    # Hint only
                    status = ["ALLOCATABLE"]
                    id = "FXfSAO-P9hO-Dgtl-0Ihf-x2jX-TnHU-kSqUA2"
                    device = "/dev/dm-12"   # Hint only
                    status = ["ALLOCATABLE"]

I confirmed on the SAN that the volumes dedicated to (and named for) this server are the ones presented to it, and the identifiers (ending in f30b and 7741) match between the SAN and the server:

# multipath -ll | egrep -A5 "(f30b|7741)"
3600a0b800029df24000017ae4f45f30b dm-7 IBM,1814      FAStT
size=575G features='1 queue_if_no_path' hwhandler='1 rdac' wp=rw
|-+- policy='round-robin 0' prio=6 status=active
| `- 6:0:0:1   sdr  65:16  active ready running
`-+- policy='round-robin 0' prio=1 status=enabled
  `- 5:0:0:1   sdg  8:96   active ghost running
--
3600a0b800029df24000011084db97741 dm-12 IBM,1814      FAStT
size=834G features='1 queue_if_no_path' hwhandler='1 rdac' wp=rw
|-+- policy='round-robin 0' prio=6 status=active
| `- 5:0:0:7   sdl  8:176  active ready running
`-+- policy='round-robin 0' prio=1 status=enabled
  `- 6:0:0:7   sdw  65:96  active ghost running

Neither device has a partition table (by design):

# fdisk -l /dev/dm-7 /dev/dm-12 | grep table
Disk /dev/dm-7 doesn't contain a valid partition table
Disk /dev/dm-12 doesn't contain a valid partition table

I can read from the devices directly:

# dd if=/dev/dm-7 of=/tmp/a bs=1024 count=1
1+0 records in
1+0 records out
1024 bytes (1.0 kB) copied, 0.00121051 s, 846 kB/s
# strings /tmp/a
LABELONE
LVM2 001FXfSAOP9hODgtl0Ihfx2jXTnHUkSqUA2
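
The `LABELONE` signature shows an intact LVM2 PV label on the device (it normally sits in the second 512-byte sector, which is why a 1 KiB read captures it), and the string following `LVM2 001` is the PV UUID with its dashes stripped. As a quick sanity check, a small Python sketch (a hypothetical helper, not part of LVM) can re-insert the dashes and compare the result against the UUID that vgcfgrestore is complaining about:

```python
def redash(raw: str) -> str:
    """Re-insert dashes into a 32-character LVM UUID as read from a PV label.

    LVM prints its UUIDs in character groups of 6-4-4-4-4-4-6.
    """
    groups = (6, 4, 4, 4, 4, 4, 6)
    parts, pos = [], 0
    for g in groups:
        parts.append(raw[pos:pos + g])
        pos += g
    return "-".join(parts)

# The string seen after "LVM2 001" in the dd/strings output above:
print(redash("FXfSAOP9hODgtl0Ihfx2jXTnHUkSqUA2"))
# → FXfSAO-P9hO-Dgtl-0Ihf-x2jX-TnHU-kSqUA2
```

The result matches one of the UUIDs vgcfgrestore says it can't find, so the on-disk PV metadata is present; LVM simply isn't looking at these devices.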

I tried rebooting, and also deleting sd(r|g|l|w) and dm-(7|12) and rescanning, with no effect.

I tried recreating the PVs using the backed-up values, but it still says it can't find them:

# pvcreate --uuid "PyKfIa-cCs9-gBoh-Qb50-yOw4-dHQw-N1YELU" --restorefile /etc/lvm/backup/vgclients /dev/mapper/3600a0b800029df24000011084db97741 -t
  Test mode: Metadata will NOT be updated and volumes will not be (de)activated.
  Couldn't find device with uuid PyKfIa-cCs9-gBoh-Qb50-yOw4-dHQw-N1YELU.
  Couldn't find device with uuid FXfSAO-P9hO-Dgtl-0Ihf-x2jX-TnHU-kSqUA2.
  Device /dev/mapper/3600a0b800029df24000011084db97741 not found (or ignored by filtering).

Here is my lvm.conf; as far as I know, the only change I've made is raising the log level:

# egrep -v "^( *#|$)" /etc/lvm/lvm.conf
devices {
    dir = "/dev"
    scan = [ "/dev" ]
    preferred_names = [ ]
    filter = [ "a|^/dev/sda$|","r/.*/" ]
    cache = "/etc/lvm/.cache"
    write_cache_state = 1
    sysfs_scan = 1      
    md_component_detection = 1
    ignore_suspended_devices = 0
}
log {
    verbose = 0
    syslog = 1
    overwrite = 0
    level = 2

    indent = 1
    command_names = 0
    prefix = "  "
}
backup {
    backup = 1
    backup_dir = "/etc/lvm/backup"
    archive = 1
    archive_dir = "/etc/lvm/archive"

    retain_min = 10
    retain_days = 30
}
shell {
    history_size = 100
}
global {

    umask = 077
    test = 0
    units = "h"
    activation = 1
    proc = "/proc"
    locking_type = 3
    fallback_to_clustered_locking = 1
    fallback_to_local_locking = 1
    locking_dir = "/var/run/lvm/lock"
}
activation {
    missing_stripe_filler = "/dev/ioerror"
    reserved_stack = 256
    reserved_memory = 8192
    process_priority = -18
    mirror_region_size = 512
    readahead = "auto"
    mirror_log_fault_policy = "allocate"
    mirror_device_fault_policy = "remove"

    udev_rules = 1
    udev_sync = 1
}
dmeventd {
    mirror_library = "libdevmapper-event-lvm2mirror.so"
    snapshot_library = "libdevmapper-event-lvm2snapshot.so"
}

What gives? Where did my VG go, and how do I get it back?

Solution

A document in the Novell knowledge base appears to apply here: it explains that on SLES, LVM by default does not scan multipath devices, and so in this setup it will never see them.

解决此问题,您可以实施Novell提供的解决方法

In the devices section of /etc/lvm/lvm.conf, change the filter to:

filter = [ "a|/dev/sda.*|","a|/dev/disk/by-id/dm-uuid-.*mpath-.*|","r|.*|"]
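
LVM evaluates the patterns in devices/filter in order, and the first pattern that matches a device path decides the outcome: `a|…|` accepts, `r|…|` rejects, and a path matched by no pattern is accepted (the trailing `r|.*|` therefore rejects everything not explicitly accepted). A rough Python simulation of that first-match-wins logic (an illustration only, not LVM's actual code) shows why the original filter hid the multipath devices while the new one admits them:

```python
import re

def accepts(patterns, path):
    """First-match-wins approximation of LVM's devices/filter.

    Each pattern is 'a' (accept) or 'r' (reject), followed by a
    delimiter character and a regex; an unmatched path is accepted.
    """
    for pat in patterns:
        action, delim = pat[0], pat[1]
        regex = pat[2:pat.rfind(delim)]
        if re.search(regex, path):
            return action == "a"
    return True

old_filter = ["a|^/dev/sda$|", "r/.*/"]
new_filter = ["a|/dev/sda.*|",
              "a|/dev/disk/by-id/dm-uuid-.*mpath-.*|",
              "r|.*|"]

mpath = "/dev/disk/by-id/dm-uuid-mpath-3600a0b800029df24000017ae4f45f30b"
print(accepts(old_filter, mpath))  # False: everything except /dev/sda is rejected
print(accepts(new_filter, mpath))  # True: multipath by-id paths are now scanned
```

Note that the original filter in the question, `[ "a|^/dev/sda$|", "r/.*/" ]`, rejects every path other than /dev/sda, which is consistent with pvscan finding nothing at all.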

(This applies to SLES 11; for other versions, see the linked knowledge base article.)
