linux – RHEL 6.4: mode 1 channel bonding does not fail over

I am running RHEL 6.4, kernel-2.6.32-358.el6.i686, on an HP ML 350 G5 with two onboard Broadcom NetXtreme II BCM5708 1000Base-T NICs. My goal is to channel bond the two interfaces into a mode=1 failover pair.

My problem is that despite all the evidence that the bond is established and accepted, pulling the cable out of the primary NIC causes all communication to stop.

ifcfg-eth0 and ifcfg-eth1

First, ifcfg-eth0:

DEVICE=eth0
HWADDR=00:22:64:F8:EF:60
TYPE=Ethernet
UUID=99ea681d-831b-42a7-81be-02f71d1f7aa0
ONBOOT=yes
NM_CONTROLLED=yes
BOOTPROTO=none
MASTER=bond0
SLAVE=yes

Next, ifcfg-eth1:

DEVICE=eth1
HWADDR=00:22:64:F8:EF:62
TYPE=Ethernet
UUID=92d46872-eb4a-4eef-bea5-825e914a5ad6
ONBOOT=yes
NM_CONTROLLED=yes
BOOTPROTO=none
MASTER=bond0
SLAVE=yes

ifcfg-bond0

The configuration file for my bond:

DEVICE=bond0
IPADDR=192.168.11.222
GATEWAY=192.168.11.1
NETMASK=255.255.255.0
DNS1=192.168.11.1
ONBOOT=yes
BOOTPROTO=none
USERCTL=no
BONDING_OPTS="mode=1 miimmon=100"

/etc/modprobe.d/bonding.conf

I have an /etc/modprobe.d/bonding.conf file populated like this:

alias bond0 bonding
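
Incidentally, the alias line above only maps the name bond0 to the bonding module; the options in BONDING_OPTS are applied by the RHEL 6 initscripts through sysfs when the bond comes up. As a quick sanity check (a sketch, assuming the bond is already up and named bond0), the values the driver actually accepted can be read back directly:

# List the per-bond tunables the driver exposes, then read the two that
# matter most for an active-backup bond
ls /sys/class/net/bond0/bonding/
cat /sys/class/net/bond0/bonding/mode      # prints e.g. "active-backup 1"
cat /sys/class/net/bond0/bonding/miimon    # the MII polling interval in ms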

ip addr output

The bond is up and I can reach the server's public services via the bond's IP address:

1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN 
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,SLAVE,LOWER_UP> mtu 1500 qdisc mq master bond0 state UP qlen 1000
    link/ether 00:22:64:f8:ef:60 brd ff:ff:ff:ff:ff:ff
3: eth1: <BROADCAST,LOWER_UP> mtu 1500 qdisc mq master bond0 state UP qlen 1000
    link/ether 00:22:64:f8:ef:60 brd ff:ff:ff:ff:ff:ff
4: bond0: <BROADCAST,MASTER,LOWER_UP> mtu 1500 qdisc noqueue state UP 
    link/ether 00:22:64:f8:ef:60 brd ff:ff:ff:ff:ff:ff
    inet 192.168.11.222/24 brd 192.168.11.255 scope global bond0
    inet6 fe80::222:64ff:fef8:ef60/64 scope link 
       valid_lft forever preferred_lft forever

Bonding kernel module

...is loaded:

# cat /proc/modules | grep bond
bonding 111135 0 - Live 0xf9cdc000
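
As a minor aside (my addition, not part of the original checks), the driver version can also be read directly instead of grepping /proc/modules:

# Print just the bonding driver's version string
modinfo -F version bonding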

/sys/class/net

The /sys/class/net filesystem shows good things:

cat /sys/class/net/bonding_masters 
bond0
cat /sys/class/net/bond0/operstate 
up
cat /sys/class/net/bond0/slave_eth0/operstate 
up
cat /sys/class/net/bond0/slave_eth1/operstate 
up
cat /sys/class/net/bond0/type 
1
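
On top of the sysfs checks above, /proc/net/bonding/bond0 summarizes the bond in one place. As a sketch (assuming the bond is named bond0), these are the fields worth eyeballing when a mode=1 bond refuses to fail over:

# Mode, link monitoring status/interval, and which slave currently carries traffic
grep -E 'Bonding Mode|MII Status|MII Polling Interval|Currently Active Slave' /proc/net/bonding/bond0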

/var/log/messages

No problems appear in the log file. In fact, everything looks rather happy.

Jun 15 15:47:28 rhsandBox2 kernel: Ethernet Channel Bonding Driver: v3.6.0 (September 26, 2009)
Jun 15 15:47:28 rhsandBox2 kernel: bonding: bond0: setting mode to active-backup (1).
Jun 15 15:47:28 rhsandBox2 kernel: bonding: bond0: setting mode to active-backup (1).
Jun 15 15:47:28 rhsandBox2 kernel: bonding: bond0: setting mode to active-backup (1).
Jun 15 15:47:28 rhsandBox2 kernel: bonding: bond0: setting mode to active-backup (1).
Jun 15 15:47:28 rhsandBox2 kernel: bonding: bond0: Adding slave eth0.
Jun 15 15:47:28 rhsandBox2 kernel: bnx2 0000:03:00.0: eth0: using MSI
Jun 15 15:47:28 rhsandBox2 kernel: bonding: bond0: making interface eth0 the new active one.
Jun 15 15:47:28 rhsandBox2 kernel: bonding: bond0: first active interface up!
Jun 15 15:47:28 rhsandBox2 kernel: bonding: bond0: enslaving eth0 as an active interface with an up link.
Jun 15 15:47:28 rhsandBox2 kernel: bonding: bond0: Adding slave eth1.
Jun 15 15:47:28 rhsandBox2 kernel: bnx2 0000:05:00.0: eth1: using MSI
Jun 15 15:47:28 rhsandBox2 kernel: bonding: bond0: enslaving eth1 as a backup interface with an up link.
Jun 15 15:47:28 rhsandBox2 kernel: 8021q: adding VLAN 0 to HW filter on device bond0
Jun 15 15:47:28 rhsandBox2 kernel: bnx2 0000:03:00.0: eth0: NIC Copper Link is Up, 1000 Mbps full duplex
Jun 15 15:47:28 rhsandBox2 kernel: bnx2 0000:05:00.0: eth1: NIC Copper Link is Up, 1000 Mbps full duplex

So what's the problem?!

Pulling the network cable from eth0 causes all communications to go dark. What is the problem, and what further steps should I take to troubleshoot it?

EDIT:

Further troubleshooting

The network is a single subnet, single VLAN provided by a ProCurve 1800-8G switch. I added primary=eth0 to ifcfg-bond0 and restarted the network service, but that did not change the behavior at all. I checked /sys/class/net/bond0/bonding/primary before and after adding the primary= option and it has an empty value, which I am not sure is good or bad.
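
For reference, primary can also be set and read back at runtime through sysfs rather than via ifcfg-bond0; this is only a sketch of the check, not a claim that it changes the failover behavior described here:

# Mark eth0 as the preferred slave on the live bond, then confirm what the
# driver currently reports as the primary and the active slave
echo eth0 > /sys/class/net/bond0/bonding/primary
cat /sys/class/net/bond0/bonding/primary
cat /sys/class/net/bond0/bonding/active_slave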

Tailing /var/log/messages while eth0 has its cable removed shows only:

Jun 15 16:51:16 rhsandBox2 kernel: bnx2 0000:03:00.0: eth0: NIC Copper Link is Down
Jun 15 16:51:24 rhsandBox2 kernel: bnx2 0000:03:00.0: eth0: NIC Copper Link is Up, 1000 Mbps full duplex

I added use_carrier=0 to the BONDING_OPTS portion of ifcfg-bond0 to enable MII/ETHTOOL ioctls. After restarting the network service, there was no change in the symptoms. Pulling the cable from eth0 causes all network communication to cease. Again, no errors in /var/log/messages save for the notification that the link on that port went down.
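
One more basic check worth doing at this point (my addition, not part of the original troubleshooting): confirm that the driver itself notices the cable being pulled, since no monitoring method can trigger a failover if the reported link state never changes:

# With the cable unplugged from eth0, "Link detected" should flip to "no"
ethtool eth0 | grep 'Link detected'
ethtool eth1 | grep 'Link detected'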

Solution

READ. YOUR. CONFIGS.

And when that fails...

READ. ALL. THE. OUTPUT.

Do you see what is in ifcfg-bond0? No, do you understand what is in ifcfg-bond0?
What, in the slippery penguin's world, is miimmon=100?
Oh, I'm sorry, did you mean miimon=100?

Yeah, I thought you meant miimon and not miimmon.

Also, a big giveaway is that when you restart the network service you see this:

service network restart
Shutting down interface bond0:                             [  OK  ]
Shutting down loopback interface:                          [  OK  ]
Bringing up loopback interface:                            [  OK  ]
Bringing up interface bond0:  ./network-functions: line 446: /sys/class/net/bond0/bonding/miimmon: No such file or directory
./network-functions: line 446: /sys/class/net/bond0/bonding/miimmon: No such file or directory
                                                           [  OK  ]
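
As a sketch of the fix that output points at: correct the option name in ifcfg-bond0, then verify that the driver actually picked up the polling interval after the network restart (the values mirror the configuration shown earlier):

# ifcfg-bond0: corrected spelling (miimon, not miimmon)
BONDING_OPTS="mode=1 miimon=100"

# After `service network restart`, the interval should be reported as 100
cat /sys/class/net/bond0/bonding/miimon
grep 'MII Polling Interval' /proc/net/bonding/bond0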

Pay close attention to everything you type, and when you make the inevitable typo, pay extra close attention to every bit of output you see.

You are a bad person and you should feel bad.
