Network Configurations for Simple Workstation Clusters
There are four simple ways to integrate a DSB Scientific Workstation Cluster into an existing network. Since each node has dual Gigabit Ethernet (GbE) capability, the individual node hardware is the same in every configuration (except for the KVM). The Headless, Workstation Cluster, and Network of Workstations with Dedicated Link configurations require a GbE switch for more than two nodes.
In the following descriptions, KVM = keyboard(s), mouse (mice), and monitor(s), and GbE = Gigabit Ethernet.
- Headless Cluster
None of the cluster nodes have user I/O devices (KVM); access to the system is via a terminal client (ssh), VNC, or a browser on another workstation. The front-end node (sometimes called "master" or "server") is the gateway to the system; none of the remaining nodes connects directly to the existing network. Generally, users do not access the back-end ("slave" or "client") nodes directly.
This configuration is useful if you want a pure computational cluster with all user-interactive tasks done on existing workstations. It is relatively inexpensive since there are no KVMs. On the down side, if the front-end's link to the existing LAN fails, the system will be inaccessible until a KVM is connected.
The DSB Scientific system prices listed on the web site are for headless systems.
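Access to a headless system typically looks like the session sketch below; the hostnames `frontend` and `node01` and the user name are hypothetical stand-ins for your own.

```shell
# From an existing workstation on the LAN, log in to the front-end
# (the only node reachable from the office network):
ssh user@frontend

# From the front-end, back-end nodes are reachable over the cluster's
# internal network (though users normally leave these to the job system):
ssh user@node01
```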
- Workstation Cluster
In this configuration, the front-end node does have a KVM, but the remaining cluster nodes do not. Like the headless configuration (above), the front-end is the only node connected to the existing network. Alternatively, this configuration can be used as a stand-alone system not connected to an existing office/lab network. The addition of one KVM slightly increases the cost of this configuration compared to a headless system.
Although users can still access the front-end via ssh or VNC, this configuration is best suited to situations where one user at a time is assigned to the cluster.
- Network of Workstations with Dedicated Link
This double-duty cluster has a KVM on each node, and each node is available for use as a workstation. In addition, a dedicated GbE link exists, independent of the normal LAN, so the compute nodes have their own communication path. This separates internode communication traffic from regular network traffic on the office/lab LAN (email, file transfers, etc.).
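One common way to realize the dedicated link is to give each node a second address on a private subnet reserved for internode traffic. A sketch of the cluster-internal entries in each node's `/etc/hosts`, assuming hypothetical node names and the private 192.168.10.0/24 range:

```
# /etc/hosts fragment: the first GbE interface keeps its normal LAN
# address; the second GbE interface carries only cluster traffic.
192.168.10.1    node01-cluster
192.168.10.2    node02-cluster
192.168.10.3    node03-cluster
192.168.10.4    node04-cluster
```

Message-passing traffic (MPI, etc.) is then directed at the `-cluster` names, so it never crosses the office LAN.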
Consider this configuration if you are adding office workstations and you want a computational cluster, too. Since each node is a dual-core, dual-memory-channel system, computations can be assigned in the background to one processor core (with its dedicated memory), leaving the other core to the user for interactive tasks. During off-hours, or when the 'user' cores are not needed, both processor cores on all nodes can be dedicated to calculation.
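On Linux, assigning the background computation to one core can be done with `taskset` (from util-linux); a minimal sketch, in which the `echo` command is a stand-in for a real compute job:

```shell
# Run a (stand-in) compute job pinned to core 0, at low priority,
# leaving core 1 free for the interactive user:
taskset -c 0 nice -n 19 echo "compute job pinned to core 0"
```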
Such a workstation cluster will require considerable cooperation from the users, and may be somewhat harder to administer than the simpler systems listed previously. In addition, a KVM is needed on every node, which increases the cost of the system (decreasing the compute-power/cost ratio).
- Simple Network of Workstations
Another double-duty parallel computing environment, this configuration is almost identical to the Network of Workstations (with dedicated link) listed previously. Again, a KVM is needed for each node, and again this is a useful arrangement if additional staff workstations are needed anyway.
The cost is slightly lower than the Network of Workstations (with dedicated link) since no Gigabit Ethernet switch is used, but cluster traffic will compete with office traffic on the LAN. As a result, network contention (and therefore latency) is greater, lowering the overall communication performance of the cluster.
Consider this option over the dedicated link version of the Network of Workstations only if one or more of the following is true:
- you know your existing LAN traffic is very small when parallel calculations are performed
- you can sacrifice communication performance, such as with 'embarrassingly parallel' problems
- you plan to run only serial jobs on individual processors