| Thread ID: 112339 | 2010-09-01 22:35:00 | Connecting two servers via their second NICs | nofam (9009) | Press F1 |
| Post ID | Timestamp | Content | User |
| 1133760 | 2010-09-01 22:35:00 | Hi All, I have a new VMware host server which I need to connect via a gigabit switch to an existing Server 2003 box so we can virtualise the latter. Both servers are already on the LAN via the backbone megabit switch. What I'd like to know is whether I can also connect both servers to the gigabit switch via their 2nd NICs, and then route the VM conversion traffic through the GB connection, rather than through NIC 1, which is on the MB switch. I realise it would just be simpler to move both primary connections from the MB to the GB switch, but it's a loaner, and that would mean downtime for users while I unplugged back and forth from one to the other. | nofam (9009) |
| 1133761 | 2010-09-02 01:41:00 | Should be straightforward enough. You may want to use static IPs for the secondary NICs on a different subnet; then they won't interfere or share with anything on the MB network. Then you just add routes at each end so that traffic for the GB NIC is automatically routed to the other GB NIC. If you just plug the secondary NICs into the switch with dynamic IPs it will probably also work, but you may find that the dynamic metric assigned to the GB connection makes all the traffic between the two machines (and anywhere else) go through the GB link and not use the MB one at all. Actually, re-reading the question, I suspect you'll have to use static IPs if these are the only two machines on the GB switch, as the GB NICs won't be able to see your DHCP server. As long as you select a different subnet from the MB switch side, you probably won't have to worry about the routing - that will be set up automatically with the IP config, right? | MushHead (10626) |
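A minimal sketch of MushHead's static-IP approach on the Server 2003 end, assuming the second NIC shows up as "Local Area Connection 2" and using 192.168.10.x as the spare subnet (the connection name and addresses are hypothetical; the ESX end would get its matching address through the vSphere client rather than netsh):

```
:: Give the GB NIC a static address on its own subnet, with no gateway,
:: so only 192.168.10.x traffic ever uses this link
netsh interface ip set address name="Local Area Connection 2" static 192.168.10.1 255.255.255.0

:: With the VMware host at 192.168.10.2 on the same subnet, no manual
:: route is needed - confirm the on-link route exists and the link works:
route print
ping 192.168.10.2
```

An explicit `route -p add ...` would only come into it if the two GB NICs ended up on different subnets; on a shared subnet the on-link route is created automatically, which is MushHead's point about not having to worry about the routing.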
| 1133762 | 2010-09-02 03:25:00 | Mate, you won't need to worry about traffic - P2V conversion is a very slow process, at least 1 hr for each 10GB of space, so the impact on the LAN will be very little. Are you doing ESX/ESXi, or a hosted type 2 hypervisor? | SolMiester (139) |
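(For scale: at 1 hr per 10GB, the ~142GB involved here - the 22GB O/S array plus the 120GB user array mentioned in the next post - works out to roughly 14 hours.)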
| 1133763 | 2010-09-02 07:14:00 | Thanks MH - that's exactly what we ended up doing!! :thumbs: Sol - the server is running vSphere; the tech who is doing the P2V conversion seems to think the copy process should only take 2-3 hours... O/S array is around 22GB, and the user array around 120GB!! Just about to kick off, so will let you know how it ends up!! :D | nofam (9009) |
| 1133764 | 2010-09-02 10:46:00 | You're bang-on Sol - just finished the O/S array at 9:15... ETA for completion is around 3:30am... going to be a long day tomorrow!! :p | nofam (9009) |
| 1133765 | 2010-09-05 21:41:00 | Bump - how did you get on, nofam?... You should be excited about a new vSphere install, ESX rocks. Are you using local or shared storage? How many hosts? Are you using vCenter to look after them? | SolMiester (139) |
| 1133766 | 2010-09-06 21:27:00 | Another bump for nofam... what are you using to back up your VMs? I just put a 2TB SATA drive on a Promise controller yesterday to snapshot VMs to the SATA drive and then to tape... performance is much better than it was going to a NAS box... about 25MB/s now... I use Trilead VM Explorer. | SolMiester (139) |
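(At the quoted ~25MB/s, that's roughly 90GB per hour, so a full pass over this thread's ~142GB of VM data would land on the SATA drive in a little over an hour and a half.)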
| 1133767 | 2010-09-07 04:05:00 | P2V was actually done in around 6 hours - I think the estimated time was based on the total capacity of the drive arrays, rather than used space. Host is an ML350 G6, with dual E5520s (16 logical CPUs!! :D) and 18GB RAM. I'm using the vSphere management client to access it. Very impressed with this app - it just makes everything so easy!! There are 4 guest VMs running on the host: - Terminal Server/BDC (this was the one we P2V'd) - DB Server/PDC - Exchange Server 2010 - APC management console for PowerChute. Funny timing on your post Sol - just learning about the intricacies of connecting SCSI tape drives to a host. Rebooted last night with the drive attached - the SCSI card saw the drive (Tandberg LTO-3 HH), and vSphere assigned a VMHBA to it, but it showed the path status as Dead. Apparently it's common practice to kick the VMs over to a workstation as you describe above, or use iSCSI etc. Even if we get it working, I'm skeptical about how fast the write speeds will be. I have a resolution to the above issue I'm going to try tonight, so if it works, will post it here for future reference. Have you come across this kind of thing before? But all in all, very happy - management didn't even know we'd done the P2V conversion!! | nofam (9009) |
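For the Dead path on the tape drive, a typical first check from the ESX service console would be along these lines (assuming classic ESX with a console OS, as was usual for vSphere 4 in 2010; the adapter name vmhba1 is hypothetical - use whatever VMHBA vSphere assigned):

```
# List storage paths and their states - the LTO drive should appear
# under the SCSI adapter vSphere assigned to it, with its path state
esxcfg-mpath -l

# Force a rescan of that adapter after changing cabling or termination
esxcfg-rescan vmhba1
```

The same rescan can be triggered from the vSphere client under Configuration > Storage Adapters; if the path stays Dead after a rescan, SCSI termination and bus IDs are the usual suspects.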
| 1133768 | 2010-09-07 04:45:00 | Couple of points to note... you have a Terminal Server which is also a DC?... naughty naughty - demote the TS and create a new BDC... You should have a physical PDC, and put the LTO3 on that! Exchange 2010 is going to eat your storage unless you have some decent policies in place for archiving and mailbox size. Use the free Trilead to snapshot running VMs to SATA, then to the LTO3 for off-site. | SolMiester (139) |
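Worth noting that "PDC"/"BDC" is NT4-era shorthand - in an Active Directory domain of this vintage the distinction lives in the FSMO roles, so before demoting anything it pays to confirm which box actually holds them. A quick check from any DC with the Windows Support Tools installed:

```
:: Shows which DCs hold the five FSMO roles (Schema master, Domain
:: naming master, PDC emulator, RID master, Infrastructure master)
netdom query fsmo
```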
| 1133769 | 2010-09-07 05:18:00 | Yeah I know :p - it's always been that way though, and hasn't ever caused an issue. Not too worried about Exchange - the current estimate is that all users' .pst files total around 70GB, and the Exchange VM has 268GB allocated. But yes, will be quite strict about mailbox size - too many users (a lot of whom should know better!) are completely slack about housekeeping their e-mail. Will see how the direct-attached backup performs... If we have to add another box, then so be it. | nofam (9009) |
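On being strict about mailbox size: Exchange 2010 enforces quotas at the mailbox-database (or per-mailbox) level rather than via GPO. A hedged sketch from the Exchange Management Shell, with the database name and the limits purely illustrative:

```
# Warn at 900MB, block sending at 1GB, block send/receive at 1.2GB
# for every mailbox on the (hypothetical) database "Mailbox Database 01"
Set-MailboxDatabase "Mailbox Database 01" `
    -IssueWarningQuota 900MB `
    -ProhibitSendQuota 1GB `
    -ProhibitSendReceiveQuota 1.2GB
```

Individual mailboxes can override the database defaults with Set-Mailbox, which is handy for the few users who genuinely need more room.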