Wednesday, May 30, 2012

Configuring NFS VMDK Drives Is A “Snap” With SnapManager For SQL

 

As of version 5.1, SnapManager for SQL supports the use of VMware VMDK files on both NFS and Fibre Channel storage configurations. Configuring SnapManager for SQL in these environments is not for the faint of heart. The installation guide is geared toward physical servers leveraging Fibre Channel LUNs and virtual machines using RDM (Raw Device Mapping) LUNs. While the documentation indicates that VMDK files are supported, configuration is not automatic.
The first thing I recommend is downloading and reading through the following installation/administration guides from NetApp (downloading these files requires a NetApp NOW support site login). There are a lot of prerequisites, and they vary from version to version. I will cover the current release (5.2):
  • SnapManager 5.2 for Microsoft® SQL Server – Installation and Administration Guide
  • SnapManager 5.1 for Microsoft® SQL Server – Installation and Administration Guide
  • SnapDrive 6.4 for Windows – Installation and Administration Guide
  • SnapDrive 6.4 for Windows – Release Notes
  • NetApp Virtual Storage Console 2.1.1 for VMware vSphere™ – Installation and Administration Guide
Once you are armed with the latest documentation, go ahead and download the following files from NetApp (again, these files require a NOW account and may require entitlement access based on the software and support purchased with your NetApp system). It should also be noted that some or all of these software revisions are currently FCS (First Customer Shipment) rather than GA (General Availability) releases. This means they are fully tested and meet NetApp’s criteria for production deployment, but NetApp recommends using the GA release unless a specific feature in the FCS version is required. Files for download:
  • Virtual Storage Console – VSC-2.1.2-win64.exe
  • SnapDrive for Windows – SnapDrive6.4_x64.exe
  • SnapManager for SQL – SMSQL5.2_x64.exe
With the installers downloaded, install VSC 2.1.2 on the VMware vCenter Server (or the vCenter Server virtual appliance). It is important that the VSC service is local to the vCenter server for the SnapDrive component to function correctly. Also, when registering the plugin, use the IP address of the vCenter server in both the VSC plugin source field and the vCenter server field (as opposed to using “localhost” or 127.0.0.1). Note as well that if you are using a web proxy server in your environment, you need to add a “proxy bypass” entry to your default browser configuration for the plugin to work properly.

Once the VSC is installed, ensure that VMware Tools are updated on the guest and that all required Windows updates are installed (there are a few required by SnapDrive, and they vary by OS version). SnapDrive can now be installed and registered with the VSC. Here is where the documentation for SnapManager and SnapDrive gets murky. The SnapManager for SQL 5.2 manual indicates that you need to install SnapDrive for Windows on the virtual machine and enter the IP address of the management server along with the credentials used to communicate with the VSC on that server, whereas the SnapManager for SQL 5.1 manual says to enter the IP address and credentials used to communicate with SMVI on the management server.
The difference and key clue here is that 5.2 requires the VSC, while 5.1 required SMVI. Digging through the SnapDrive documentation did not provide much clarity, but a search through NetApp’s communities provided this link. The solution was to run the following command from the guest once SnapDrive is installed:
                        
                           sdcli smvi_config set -host <VSC_server_IP>

Once this command has been run, the SnapDrive service restarted, and the disk configuration in SnapDrive refreshed, all of the VMDK volumes should be listed under drives. To finish up, once the drives are visible in SnapDrive, install SnapManager and run the configuration wizard to set up the SQL databases.
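As a quick sanity check, a sequence along the following lines can confirm that the VSC registration took and that the VMDK disks are now enumerated. This is a sketch only: the IP address is a placeholder, and the SnapDrive service name (SWSvc) is assumed, so restart the service from the Services console if the name differs on your build.

        rem Register the VSC server with SnapDrive (IP address is a placeholder)
        sdcli smvi_config set -host 10.0.0.50
        rem Confirm which VSC/SMVI server SnapDrive is pointed at
        sdcli smvi_config list
        rem Restart the SnapDrive service (service name SWSvc is assumed)
        net stop SWSvc
        net start SWSvc
        rem The NFS VMDK disks should now appear in the disk list
        sdcli disk list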

Protecting Exchange 2010 with EMC RecoverPoint and Replication Manager


 

Regular database backups of Microsoft Exchange environments are critical to maintaining the health and stability of the databases. Performing full backups of Exchange provides a database integrity checkpoint and commits transaction logs. There are many tools which can be leveraged to protect Microsoft Exchange environments, but one of the key challenges with traditional backups is the length of time that it takes to back up prior to committing the transaction logs.

Additionally, database integrity should always be checked prior to backing up to ensure the data being backed up is valid. This extended window often interferes with daily activities, so it usually must be scheduled around other maintenance tasks such as daily defragmentation. What if you could eliminate the backup window entirely?

EMC RecoverPoint, in conjunction with EMC Replication Manager, can create application-consistent replicas with next to zero impact. These replicas can be used for staging to tape, direct recovery, or object-level recovery with Recovery Storage Groups or third-party applications. They leverage Microsoft VSS technology to freeze the database, RecoverPoint bookmark technology to mark the image time in the journal volume, and then thaw the database, all in less than thirty seconds – often in less than five.

EMC Replication Manager is aware of all of the database server roles in the Microsoft Exchange 2010 Database Availability Group (DAG) infrastructure and can leverage any of the members (primary, local replica, or remote replica) as a replication source.

EMC Replication Manager automatically mounts the bookmarked replica images to a mount host running the Microsoft Exchange management tools role and the EMC Replication Manager agent. The database and transaction logs are then verified using the Eseutil utility provided with the Microsoft Exchange tools. This ensures that the replica is a valid, recoverable copy of the database. Validating the databases can take anywhere from a few minutes to several hours, depending on the number and size of the databases and transaction log files. The key point is that the load from this process does not impact the production database servers. Once the verification completes, EMC Replication Manager calls back to the production database to commit and delete the transaction logs.
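For reference, the checksum verification performed on the mount host is the same type of check you could run manually with Eseutil against the mounted replica; a minimal sketch, assuming the replica database and logs are presented under E:\DB01 (paths and log prefix are placeholders):

        rem Verify database page checksums on the mount host (path is a placeholder)
        eseutil /k "E:\DB01\DB01.edb"
        rem Verify the transaction log files (E00 is an assumed log generation prefix)
        eseutil /ml "E:\DB01\Logs\E00"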

Once the Microsoft Exchange database and transaction logs are validated, the files can be spun off to tape from the mount host, or, depending on the retention requirement, you could eliminate tape backups of the Microsoft Exchange environment completely. Depending on the write load on the Microsoft Exchange server and the size of the RecoverPoint journal volumes, you can maintain days or even weeks of retention/recovery images in a fairly small footprint compared to disk- or tape-based backup.

There are a number of recovery scenarios available from a solution based on RecoverPoint and Replication Manager. The images can be reverse-synchronized to the source – this is a fast, delta-based copy, but it is data destructive. Alternatively, the database files can be copied from the mount host to a new drive and mounted as a Recovery Storage Group on the Microsoft Exchange server. The database and log files can also be opened on the mount host directly with tools such as Kroll Ontrack for mailbox- and message-level recovery.

Networking and The Importance Of VLANs [VMware EMC]


 

We have become familiar with the term VLAN when talking about networking. Some people cringe and worry when they hear “VLAN”, while others rejoice and relish the idea. I used to be in the camp that cringed and worried, only because I lacked some basic knowledge about VLANs.

              "So let’s start with the basics: what is a VLAN?"

VLAN stands for Virtual Local Area Network, and a VLAN has the same characteristics and attributes as a physical Local Area Network (LAN). A VLAN is a separate IP subnetwork, which allows multiple networks and subnets to reside on the same switched network – a capability typically provided by routers. A VLAN essentially becomes its own broadcast domain. VLANs can be structured by department, function, or protocol, allowing for a finer level of granularity. VLANs are defined on the switch by individual ports; this allows VLANs to be placed on specific ports to restrict access.
By design, a VLAN cannot communicate directly with another VLAN; if VLANs need to communicate with one another, a router or layer 3 switching is required. VLANs can span multiple switches, and you can have more than one VLAN across multiple switches. For the most part, VLANs are relatively easy to create and manage. Most switches allow VLAN creation via command-line (Telnet/SSH) and GUI interfaces, which are becoming increasingly popular.
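To make this concrete, here is a minimal Cisco IOS-style sketch of creating a VLAN and assigning it to an access port; the VLAN ID, name, and interface are placeholders and will differ in your environment:

        ! Create the VLAN and give it a descriptive name (ID 100 is a placeholder)
        configure terminal
        vlan 100
         name NFS_Storage
        exit
        ! Assign an access port to the new VLAN
        interface GigabitEthernet0/10
         switchport mode access
         switchport access vlan 100
        end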
VLANs can address many issues, such as:
  1. Security – Security is an important function of VLANs. A VLAN separates data that could be sensitive from the general network, allowing sensitive or confidential data to traverse the network while decreasing the chance that users will gain access to data they are not authorized to see. Example: an HR department’s computers/nodes can be placed in one VLAN and an Accounting department’s in another, keeping the two traffic streams completely separate. The same principle can be applied to protocols such as NFS, CIFS, replication, VMware (vMotion), and management traffic.
  2. Cost – Cost savings can be realized by eliminating the need for additional expensive network equipment. VLANs also allow the network to work more efficiently and make better use of bandwidth and resources.
  3. Performance – Splitting a switch into VLANs creates multiple broadcast domains, which reduces unnecessary traffic on the network and increases network performance.
  4. Management – VLANs allow for flexibility with the current infrastructure and for simplified administration of multiple network segments within one switching environment.
VLANs are a great resource and tool to assist in fine-tuning your network. Don’t be afraid of VLANs; rather, embrace them for the many benefits they can bring to your infrastructure.

How To: Replicating VMware NFS Datastores With VNX Replicator


 
To follow up on my last blog regarding NFS datastores, I will address how to replicate VMware NFS datastores with VNX Replicator. Because NFS datastores exist on VNX file systems, they can be replicated to an off-site VNX over a WAN.
Leveraging VNX Replicator allows you to use your existing WAN link to keep file systems in sync with other VNX arrays. VNX only requires the replication license to be enabled on the off-site VNX and the use of your existing WAN link; there is no additional hardware other than the replicating VNX arrays and the WAN link.
VNX Replicator leverages checkpoints (snapshots) to record any changes made to the file systems. As changes are made to a file system, the replication checkpoints initiate writes to the target, keeping the file systems in sync.
Leveraging Replicator with VMware NFS datastores creates a highly available virtual environment that keeps your NFS datastores in sync and available remotely whenever needed. VNX Replicator allows a maximum of ten minutes of “out-of-sync” time, so depending on WAN bandwidth and availability, your NFS datastores can be restored to within ten minutes of the point of failure.
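For context, a file system replication session is typically created from the VNX Control Station with the nas_replicate command. The sketch below uses placeholder session, file system, and interconnect names, and the exact options should be verified against your VNX for File documentation:

        # Create a replication session for the NFS datastore file system (names are placeholders)
        nas_replicate -create NFS_DS01_rep -source -fs NFS_DS01 \
          -destination -fs NFS_DS01_dr -interconnect NYtoDR \
          -max_time_out_of_sync 10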
The actual NFS failover process can be very time consuming: once you initiate the failover, you still have to mount the datastores in the target virtual environment and add each VM into inventory. When you finally have all of the VMs loaded, you must then configure the networking.
Fortunately, VMware Site Recovery Manager (SRM) has a plug-in that can automate the entire process. Once you have configured the failover policies, SRM will mount all of the NFS datastores and bring the virtual environment online. These are just a few of the VNX Replicator features that can integrate with your systems.

Leveraging EMC, VNX, & NFS To Work For Your VMware Environments #increasestoragecapacity

 
Storage Benefits
NFS (Network File System) is native to UNIX and Linux. On the VNX, NFS file systems can be provisioned thin rather than thick, as is typical with iSCSI or Fibre Channel LUNs. Thin provisioning LUNs or datastores allows the end user to manage NAS capacity efficiently. Users have reported a 50% increase in both capacity and usable space.
NFS datastores are also a lot easier to attach to hosts than FC or iSCSI datastores: there are no HBAs or Fibre Channel fabric involved, and all that needs to be created is a VMkernel port for networking. NAS and SAN capacity can quickly become scarce if the end user can’t control the amount of storage being used, or if there are VMs with over-provisioned VMDKs. NFS file systems can also be deduplicated, so users not only save space via thin provisioning, the VNX can also track similar data and store only the changes to the file system.
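As a simple illustration of how little plumbing is involved, an NFS export can be mounted as a datastore from the ESXi command line roughly as follows (the VMkernel IP, array IP, export path, and datastore name are placeholders, and the port group is assumed to already exist on a vSwitch; the vSphere Client accomplishes the same thing):

        # Create a VMkernel NIC for NFS traffic on an existing port group (names/IPs are placeholders)
        esxcfg-vmknic -a -i 192.168.10.21 -n 255.255.255.0 "NFS_VMkernel"
        # Mount the VNX NFS export as a datastore
        esxcli storage nfs add --host=192.168.10.10 --share=/NFS_DS01 --volume-name=NFS_DS01
        # Confirm the datastore is mounted
        esxcli storage nfs list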
EMC and VMware’s best practice is to use deduplication on NFS exports that house ISOs, templates, and other miscellaneous tools and applications. Enabling deduplication on file systems that house VMDKs is not a best practice, because the VMDKs will not compress well. Automatic Volume Management (AVM) can also stripe the NFS volumes across multiple RAID groups (assuming the array was purchased with more than just six drives), which increases the I/O performance of the file system and the VMs. AVM can also extend the file system transparently to VMware: it will extend the file system onto the next available empty volume, meaning that if you add drives to the file system you will also be increasing the performance of your virtual machines.
Availability Benefits
Using VNX SnapSure, snapshots (checkpoints) can be taken of the NFS file systems and mounted anywhere, in both physical and virtual environments. NFS snapshots allow you to mount production datastores in your virtual environment and use them for testing VMs without affecting production data. Leveraging SnapSure allows the end user to meet specific RTO and RPO objectives. SnapSure can create 96 checkpoints and 16 writable snapshots per file system, not to mention the ease of use SnapSure has over SnapView. SnapSure is configured at the file system level: just right-click the file system, select how many snapshots you need, add a schedule, and you’re finished.
From my experience in the field, end users find this process much easier than SnapView or Replication Manager. Using VNX NFS also enables the user to replicate the file system to an off-site VNX or NS-series system without adding any additional networking hardware. VNX Replicator allows the user to mount file systems at other sites without affecting production machines, and supports up to 1,024 replicated file systems with 256 active sessions.
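For those who prefer the Control Station command line over Unisphere, a SnapSure checkpoint can be created with something along these lines; the file system and checkpoint names are placeholders, and the syntax should be verified against your VNX OE for File release:

        # Create a read-only checkpoint of the NFS datastore file system (names are placeholders)
        fs_ckpt NFS_DS01 -name NFS_DS01_ckpt1 -Create
        # List the checkpoints that exist for the file system
        fs_ckpt NFS_DS01 -list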
Networking Benefits
VNX Data Movers can be purchased with 1 Gb/s or 10 Gb/s NICs. Depending on your existing infrastructure, the VNX can leverage LACP or EtherChannel trunks to increase the bandwidth and availability of your NFS file systems. LACP trunks enable the Data Mover to monitor and proactively reroute traffic across all available NICs in a Fail-Safe Network, increasing storage availability. In my experience, customers leveraging 10 Gb Ethernet for NFS have seen a huge improvement in read/write performance to disk and storage, as well as in Storage vMotion from datastore to datastore, with far better bandwidth and throughput.

Tech For Dummies: Cisco MDS 9100 Series Zoning & EMC VNX Host Add – A “How To” Guide By: Eli Mercado


Before we begin zoning, please make sure you have cabled each HBA to both switches, ensuring the host is connected to each fabric. Now let’s get started…
Configuring and Enabling Ports with Cisco Device Manager:
Once your HBAs are connected, we must first enable and configure the ports.
1. Open Cisco Device Manager to enable the port.
2. Type in the IP address, username, and password of the first switch.
3. Right-click the port to which you attached the FC cable and select “Enable”:
 http://www.integrateddatastorage.com/wp-content/uploads/2012/04/Blog-3.jpg
Cisco allows the use of multiple VSANs (Virtual Storage Area Networks). If you have created a VSAN other than VSAN 1, you must configure the port for the VSAN you created.
1. To do this, right-click the port you enabled and select “Configure”:
 http://www.integrateddatastorage.com/wp-content/uploads/2012/04/Blog-4.jpg
2. When the following screen appears, click on Port VSAN and select your VSAN, then click “Apply”:
 http://www.integrateddatastorage.com/wp-content/uploads/2012/04/Blog-5.jpg
3. Save your configuration by clicking on “Admin” and selecting “Save Configuration”; when the “Save Configuration” confirmation pops up, select “Yes”:
http://www.integrateddatastorage.com/wp-content/uploads/2012/04/Blog-6.jpg
 http://www.integrateddatastorage.com/wp-content/uploads/2012/04/Blog-7.jpg
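For reference, the same port enable and VSAN assignment can also be done from the NX-OS CLI; a sketch with a placeholder interface and VSAN number:

        ! Enable the port and assign it to the VSAN (interface fc1/1 and VSAN 10 are placeholders)
        configure terminal
        interface fc1/1
         no shutdown
        vsan database
         vsan 10 interface fc1/1
        end
        copy running-config startup-config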
Once you have enabled and configured the ports, we can zone your host’s HBAs to the SAN.
Login to Cisco Fabric Manager:
1. Let’s begin by opening Cisco Fabric Manager:
 http://www.integrateddatastorage.com/wp-content/uploads/2012/04/Blog-8.jpg
2. Enter the FM server username and password (EMC default: admin / password), then click “Login”:
 http://www.integrateddatastorage.com/wp-content/uploads/2012/04/Blog-9-.jpg
3. Highlight the switch you intend to zone and select “Open”:
 http://www.integrateddatastorage.com/wp-content/uploads/2012/04/Blog-10.jpg
4. Expand the switch and right-click the VSAN, then select “Edit Local Full Zone Database”:
 http://www.integrateddatastorage.com/wp-content/uploads/2012/04/Blog-11.jpg
Creating An FC Alias:
In order to properly manage your zones and HBAs, it is important to create an “FC Alias” for the WWN of each HBA. The zone database screen will appear:
1. Right-click “FC-Aliases” and select “Insert”; once selected, the next screen will appear. Type in the name of the host and HBA ID, for example: SQL_HBA0. Click the down arrow, select the WWN that corresponds to your server, then click “OK”:
 http://www.integrateddatastorage.com/wp-content/uploads/2012/04/Blog-12.jpg

Creating Zones:
Now that we have created FC aliases, we can move forward with creating zones. Zones control which HBAs and targets are allowed to communicate with one another. Let’s begin creating zones:
1. Right-click on “Zones”.
2. Select “Insert” from the drop-down menu. A new screen will appear.
3. Type in the name of the zone; for management purposes, use a consistent naming format, for example: SQL01_HBA0_VNX_SPA0.
4. Click “OK”:
 http://www.integrateddatastorage.com/wp-content/uploads/2012/04/Blog-13.jpg
  Note: These steps must be repeated to zone the host’s HBA to the second storage controller. In our case, VNX_SPB1.
 Adding Members to Zones:
Once the zone names are created, insert the aliases into the zones:
5. Right-click on the zone you created.
6. Select “Insert”, and a new screen will appear.
7. Select “FC-Alias”, click on the “…” box, then select the host FC alias.
8. Select the target FC alias, click “OK”, and click “Add”:
 http://www.integrateddatastorage.com/wp-content/uploads/2012/04/Blog-14.jpg
 http://www.integrateddatastorage.com/wp-content/uploads/2012/04/Blog-15.jpg
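For readers who prefer the switch CLI over Fabric Manager, the alias, zone, and zoneset steps above map to NX-OS commands roughly as follows; the VSAN number, alias names, pWWNs, and zoneset name are all placeholders in this sketch:

        ! Create FC aliases for the host HBA and the array SP port (pWWNs are placeholders)
        configure terminal
        fcalias name SQL01_HBA0 vsan 10
         member pwwn 10:00:00:00:c9:aa:bb:cc
        fcalias name VNX_SPA0 vsan 10
         member pwwn 50:06:01:60:aa:bb:cc:dd
        ! Create the zone and add both aliases as members
        zone name SQL01_HBA0_VNX_SPA0 vsan 10
         member fcalias SQL01_HBA0
         member fcalias VNX_SPA0
        ! Add the zone to the zoneset and activate it on this fabric
        zoneset name Fabric_A_ZS vsan 10
         member SQL01_HBA0_VNX_SPA0
        zoneset activate name Fabric_A_ZS vsan 10
        end
        copy running-config startup-config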
Creating Storage Groups:
Now that we have zoned the HBAs to the array, we can allocate storage to your hosts. To do this we must create “Storage Groups”, which grant the connected hosts access to LUNs in the array. Let’s begin by logging into the array and creating the storage groups:
1. Login to Unisphere and select the array from the dashboard:
 http://www.integrateddatastorage.com/wp-content/uploads/2012/04/Blog-16.jpg
2. Select “Storage Groups” under the Hosts tab:
 http://www.integrateddatastorage.com/wp-content/uploads/2012/04/Blog-17.jpg
3. Click “Create” to create a new storage group:
 http://www.integrateddatastorage.com/wp-content/uploads/2012/04/Blog-18.jpg
4. The following screen will appear; type in the name of the storage group. Typically you will want to use the name of the application or the host’s cluster name.
 http://www.integrateddatastorage.com/wp-content/uploads/2012/04/Blog-19.jpg
5. The screen below will pop up; click “Yes” to continue and add LUNs and hosts to the storage group:
 http://www.integrateddatastorage.com/wp-content/uploads/2012/04/Blog-20.jpg
6. The next screen will allow you to select either newly created LUNs or LUNs that already exist in other storage groups. Once you add the LUN or LUNs to the group, click on the hosts tab to continue adding hosts:
 http://www.integrateddatastorage.com/wp-content/uploads/2012/04/Blog-21.jpg
7. In the hosts tab, select the Hosts we previously zoned and click on the forward arrow. Once the host appears in the right pane, click OK:  
 http://www.integrateddatastorage.com/wp-content/uploads/2012/04/Blog-22.jpg
8. At this point a new screen will pop up; click “Yes” to commit:
 http://www.integrateddatastorage.com/wp-content/uploads/2012/04/Blog-23.jpg
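If you would rather script the storage group work than click through Unisphere, naviseccli can perform roughly the same steps. This is a sketch only: the SP address, storage group name, host name, and LUN numbers are placeholders, and it assumes a Navisphere/Unisphere security file is in place so credentials can be omitted:

        # Create the storage group (names and SP address are placeholders)
        naviseccli -h 10.0.0.60 storagegroup -create -gname SQL01_SG
        # Add array LUN 25 to the group, presented to the host as host LUN 0
        naviseccli -h 10.0.0.60 storagegroup -addhlu -gname SQL01_SG -hlu 0 -alu 25
        # Connect the registered host to the storage group
        naviseccli -h 10.0.0.60 storagegroup -connecthost -host SQL01 -gname SQL01_SG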

Once you have completed these tasks successfully, your hosts will see new raw devices. From this point on, use your OS partitioning tool to create volumes.
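On a Windows host, for example, the new raw device can be brought online and formatted with diskpart; the disk number, drive letter, and label below are placeholders (on Linux you would use fdisk/parted and mkfs instead):

        rem Run these commands inside diskpart.exe on the Windows host
        list disk
        select disk 1
        online disk
        attributes disk clear readonly
        create partition primary
        format fs=ntfs quick label="SQL_Data"
        assign letter=E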