I am using a special, though not rare, network configuration for our OSCAR clusters. The reasons I designed it that way were quite simple:
- I want larger bandwidth for each computing node.
- Because I can.
Here, let me write down the configuration for anyone who might find it interesting. First of all, my Linux server (the head node) has 6 GbE network ports, and the first one is already used for the Internet connection. I just needed a way to bind the remaining 5 Ethernet ports to the same IP address and share the bandwidth load among them. In Fedora Core 2/3 Linux this is very easy to do: just set each member port of the Ethernet bridge to ONBOOT=yes, IPADDR=0.0.0.0 and BRIDGE=br0 in its ifcfg file (br0 is a virtual bridge device); see the sketch below for an example.
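As an illustration, a member port's file, say /etc/sysconfig/network-scripts/ifcfg-eth1 (the device name eth1 is just an example here; use your own NIC names), could look like this:

# member of bridge br0 - carries no IP address of its own
DEVICE=eth1
BOOTPROTO=static
IPADDR=0.0.0.0
ONBOOT=yes
BRIDGE=br0

The same three settings (ONBOOT, IPADDR, BRIDGE) go into the ifcfg files of the other member ports as well.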
Meanwhile, the ifcfg-br0 setting would be like this:

DEVICE=br0
TYPE=Bridge
BOOTPROTO=static
IPADDR=10.0.0.254
NETMASK=255.255.255.0
ONBOOT=yes
DELAY=0
STP=on

For more information, check out the FAQ of the Linux Ethernet bridge project.
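Once the ifcfg files are in place, a quick sanity check could look like this (a sketch, assuming the bridge-utils package is installed, which is normally the case on Fedora Core 2/3, and that the member ports are eth1 through eth5 as in the example above):

service network restart   # bring up br0 and enslave the member ports
brctl show                # br0 should list eth1 ... eth5 as its interfaces
ip addr show br0          # the bridge itself should carry 10.0.0.254/24

If br0 comes up with the right address and all the member ports appear under it, the bridge side is done.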
Now, I presume the readers of this blog entry have already installed OSCAR in /opt/oscar but have not yet run "./install_cluster br0". In that case, you can modify the script /opt/oscar/packages/pfilter/scripts/post_clients like this (if you like the patch file more ...):

--- post_clients.orig	2005-08-23 20:35:18.000000000 -0700
+++ post_clients	2005-08-23 20:34:44.000000000 -0700
@@ -176,7 +176,7 @@
 # the server and every compute node trust each other
-trusted %oscar_server% %nodes%
+trusted %oscar_server% %nodes% $on_interface
 open multicast	# for ganglia
 #

Or, if you already did "./install_cluster br0", just modify /etc/pfilter.conf instead: add br0 to the "trusted %oscar_server% %nodes%" line and then issue "service pfilter restart". That's it; your computing nodes can connect through the bridge interface now.