
Wednesday, May 30, 2012

Protecting Exchange 2010 with EMC RecoverPoint and Replication Manager


 

Regular database backups of Microsoft Exchange environments are critical to maintaining the health and stability of the databases. Performing full backups of Exchange provides a database integrity checkpoint and commits transaction logs. There are many tools which can be leveraged to protect Microsoft Exchange environments, but one of the key challenges with traditional backups is the length of time that it takes to back up prior to committing the transaction logs.

Additionally, database integrity should always be checked prior to backup to ensure the data being protected is valid. This extended backup window often interferes with daily activities, so it usually must be scheduled around other maintenance tasks, such as daily defragmentation. What if you could eliminate the backup window entirely?

EMC RecoverPoint, in conjunction with EMC Replication Manager, can create application-consistent replicas with next to zero impact, which can then be used for staging to tape, direct recovery, or object-level recovery with recovery databases (the Exchange 2010 successor to recovery storage groups) or third-party tools. These replicas leverage Microsoft VSS technology to freeze the database, RecoverPoint bookmark technology to mark the point-in-time image in the journal volume, and then thaw the database, all in less than thirty seconds and often in less than five.

EMC Replication Manager is aware of all of the database server roles in the Microsoft Exchange 2010 Database Availability Group (DAG) infrastructure and can leverage any of the members (primary, local replica, or remote replica) as the replication source.

EMC Replication Manager automatically mounts the bookmarked replica images to a mount host running the Microsoft Exchange management tools role and the EMC Replication Manager agent. The database and transaction logs are then verified using the eseutil utility provided with the Microsoft Exchange tools, which ensures that the replica is a valid, recoverable copy of the database. Validation can take anywhere from a few minutes to several hours, depending on the number and size of the databases and transaction log files; the key is that the load from this process never touches the production database servers. Once the verification completes, EMC Replication Manager calls back to the production database to commit and delete the transaction logs.
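For reference, the same kind of integrity checks can be run by hand on the mount host with eseutil. The sketch below assumes the replica was mounted with the database at E:\DB\MBX01DB01.edb and logs under E:\Logs with the E00 prefix; all paths and names are placeholders, and Replication Manager performs this step automatically.

    REM Verify the page checksums of the mounted replica database
    eseutil /k E:\DB\MBX01DB01.edb

    REM Dump the database header to check its shutdown state
    eseutil /mh E:\DB\MBX01DB01.edb

    REM Verify the transaction log stream (E00 is a placeholder log prefix)
    eseutil /ml E:\Logs\E00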

Once the Microsoft Exchange database and transaction logs are validated, the files can be spun off to tape from the mount host, or, depending on your retention requirements, you could eliminate tape backups of the Microsoft Exchange environment completely. Depending on the write load on the Microsoft Exchange servers and the size of the RecoverPoint journal volumes, you can maintain days or even weeks of retention/recovery images in a fairly small footprint compared to disk- or tape-based backup.

There are a number of recovery scenarios available from a solution based on RecoverPoint and Replication Manager. The images can be reverse-synchronized to the source – a fast, delta-based copy, but one that is data destructive. Alternatively, the database files can be copied from the mount host to a new drive and mounted as a recovery database on the Microsoft Exchange server. The database and log files can also be opened on the mount host directly with tools such as Kroll OnTrack for mailbox- and message-level recovery.
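To illustrate the recovery-database path, a minimal Exchange 2010 Management Shell sketch might look like the following. It assumes the replica files were copied to D:\RDB on a mailbox server named MBX01 and that a single mailbox needs to be restored; the server name, paths, and mailbox names are all placeholders.

    # Create a recovery database that points at the copied replica files
    New-MailboxDatabase -Recovery -Name RDB01 -Server MBX01 -EdbFilePath "D:\RDB\MBX01DB01.edb" -LogFolderPath "D:\RDB\Logs"

    # The copied database may need a soft recovery (eseutil /r) to reach a clean shutdown state before it will mount
    Mount-Database RDB01

    # Restore a single mailbox from the recovery database into the live mailbox (Exchange 2010 SP1 and later)
    New-MailboxRestoreRequest -SourceDatabase RDB01 -SourceStoreMailbox "Jane Doe" -TargetMailbox jdoe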

Leveraging EMC VNX & NFS To Work For Your VMware Environments #increasestoragecapacity

 
Storage Benefits
NFS (Network File System) is native to UNIX and Linux file systems. Unlike iSCSI or Fibre Channel, NFS allows the file system to be provisioned thin instead of thick. Provisioning LUNs or datastores thin allows the end user to efficiently manage their NAS capacity, and users have reported roughly a 50% increase in usable capacity.
NFS datastores are also much easier to attach to hosts than FC or iSCSI: there are no HBAs or Fibre Channel fabric involved, and all that needs to be created is a VMkernel port for networking. NAS and SAN capacity can quickly become scarce if the end user can't control the amount of storage being used, or if there are VMs with over-provisioned VMDKs. NFS file systems can also be deduplicated, so beyond the space saved through thin provisioning, the VNX can track similar data and store only the changes to the file system.
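To show how little plumbing is involved, here is a rough ESXi 5.x command-line sketch for adding a VMkernel interface and mounting a VNX NFS export. The vSwitch, port group, IP addressing, NFS server address, and export path are all assumptions for your environment.

    # Create a port group and a VMkernel interface for NFS traffic
    esxcli network vswitch standard portgroup add --portgroup-name NFS --vswitch-name vSwitch1
    esxcli network ip interface add --interface-name vmk2 --portgroup-name NFS
    esxcli network ip interface ipv4 set --interface-name vmk2 --ipv4 192.168.50.11 --netmask 255.255.255.0 --type static

    # Mount the VNX NFS export as a datastore
    esxcli storage nfs add --host 192.168.50.20 --share /vnx_nfs_ds01 --volume-name NFS_DS01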
EMC and VMware's best practice is to use deduplication on NFS exports which house ISOs, templates, and other miscellaneous tools and applications. Enabling deduplication on file systems which house VMDKs is not a best practice, because VMDKs do not compress well. Automatic Volume Management (AVM) can also stripe the NFS volumes across multiple RAID groups (assuming the array was purchased with more than just six drives), which increases the I/O performance of the file system and the VMs on it. AVM also extends the file system to the next available empty volume, so adding drives to the file system transparently increases the performance available to your virtual machines.
Availability Benefits
Using VNX, SnapSure checkpoints can be taken of the NFS file systems and mounted anywhere in both physical and virtual environments. NFS snapshots allow you to mount copies of production datastores in your virtual environment and use them for testing VMs without affecting production data. Leveraging SnapSure helps the end user meet RTO and RPO objectives: it can create 96 checkpoints and 16 writable snapshots per file system. It is also far simpler than SnapView – SnapSure is configured at the file system level; just right-click the file system, select how many snapshots you need, add a schedule, and you're finished.
In my experience in the field, end users find this process much easier than SnapView or Replication Manager. Using VNX, NFS also enables the user to replicate the file system to an offsite NS4-XXX without adding any additional networking hardware. VNX Replicator allows the user to mount file systems at other sites without affecting production machines, and supports up to 1,024 replicated file systems and 256 active sessions.
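For those who prefer the Control Station command line over Unisphere, a minimal SnapSure sketch might look like this; the file system and checkpoint names are placeholders, and the exact syntax should be confirmed against your VNX OE for File release.

    # Create a read-only checkpoint of the production NFS file system
    fs_ckpt vmware_nfs_fs01 -name ckpt_daily_01 -Create

    # Checkpoints show up in the file system listing as type ckpt
    nas_fs -list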
Networking Benefits
VNX Data Movers can be purchased with 1 Gb/s or 10 Gb/s NICs. Depending on your existing infrastructure, the VNX can leverage LACP or EtherChannel trunks to increase the bandwidth and availability of your NFS file systems. LACP trunks let the Data Mover monitor and proactively reroute traffic across all available NICs in the Fail-Safe Network, increasing storage availability. In my experience, customers leveraging 10 Gb/s for NFS have seen a huge improvement in read/write performance to disk and storage, as well as in Storage vMotion from datastore to datastore, making full use of the available bandwidth and throughput.

Tech For Dummies: Cisco MDS 9100 Series Zoning & EMC VNX Host Add – A “How To” Guide. By: Eli Mercado


Before we begin zoning, please make sure you have cabled each HBA to both switches, ensuring the host is connected to each switch. Now let's get started…
Configuring and Enabling Ports with Cisco Device Manager:
Once your HBAs are connected, we must first enable and configure the ports.
1. Open Cisco Device Manager to enable port:


2. Type in the IP address, username and password of the first switch:
 
3. Right-click the port to which you attached the FC cable and select “Enable”:
Cisco allows the usage of multiple VSANs (Virtual Storage Area Networks). If you have created a VSAN other than the default VSAN 1, you must configure the port for the VSAN you created.
1. To do this, right-click the port you enabled and select “Configure”:
2. When the following screen appears, click on Port VSAN and select your VSAN, then click “Apply”:
3. Save your configuration by clicking “Admin” and selecting “Save Configuration”. When the “Save Configuration” prompt appears, select “Yes”. (The equivalent NX-OS CLI commands are sketched below.)
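For reference, the same port enable and VSAN assignment can be done from the MDS command line. A rough sketch, assuming the HBA landed on interface fc1/1 and your VSAN is 10 (both placeholders):

    switch# configure terminal
    ! Put the interface into the VSAN created earlier
    switch(config)# vsan database
    switch(config-vsan-db)# vsan 10 interface fc1/1
    switch(config-vsan-db)# exit
    ! Enable the port
    switch(config)# interface fc1/1
    switch(config-if)# no shutdown
    switch(config-if)# end
    ! Save the configuration
    switch# copy running-config startup-config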
Once you have enabled and configured the ports, we can now zone your host's HBAs to the SAN.
Login to Cisco Fabric Manager:
1. Let’s begin by opening Cisco Fabric Manager:
2. Enter the FM server username and password (EMC default: admin / password), then click “Login”:
3. Highlight the switch you intend to zone and select “Open”:
4. Expand the switch and right-click the VSAN, then select “Edit Local Full Zone Database”:
Creating An FC Alias:
In order to properly manage your zones and HBAs, it is important to create an “FC Alias” for the WWN of each HBA.
1. In the zone database screen that appears, right-click “FC-Aliases” and select “Insert”. On the next screen, type in the name of the host and HBA ID, for example SQL01_HBA0. Click the down arrow, select the WWN that corresponds to your server, and finally click “OK”:
Creating Zones:
Now that we have created FC-Aliases, we can move forward with creating zones. A zone defines which HBAs (initiators) and targets are allowed to communicate, isolating them from everything else on the fabric. Let's begin creating zones by:
1. Right-clicking on “Zones”.
2. Select “Insert” from the drop down menu. A new screen will appear.
3. Type in the name of the “Zone”. For management purposes, use a consistent format such as <Host>_<HBA>_<Array>_<SP port>, for example: SQL01_HBA0_VNX_SPA0.
4. Click “OK”:
Note: These steps must be repeated to zone the host's HBA to the second storage controller – in our case, VNX_SPB1.
 Adding Members to Zones:
Once the zone names are created, insert the aliases into the zones (a CLI equivalent for the full zoning workflow is sketched after these steps):
5. Right-click on the Zone you created.
6. Select “Insert”, and a new screen will appear.
7. Select “FC-Alias”, click the “…” box, then select the host FC alias.
8. Select the target FC alias, click “OK”, and then click “Add”:
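For reference, the whole alias / zone / zoneset workflow can also be driven from the MDS CLI. A rough sketch, assuming VSAN 10, placeholder WWNs, and a zoneset named ZS_VSAN10 (adjust every name to your environment, and note that the zoneset must be activated for the zones to take effect):

    switch# configure terminal
    ! Create aliases for the host HBA and the VNX storage processor port
    switch(config)# fcalias name SQL01_HBA0 vsan 10
    switch(config-fcalias)# member pwwn 10:00:00:00:c9:aa:bb:cc
    switch(config-fcalias)# exit
    switch(config)# fcalias name VNX_SPA0 vsan 10
    switch(config-fcalias)# member pwwn 50:06:01:60:aa:bb:cc:dd
    switch(config-fcalias)# exit
    ! Create the zone and add both aliases as members
    switch(config)# zone name SQL01_HBA0_VNX_SPA0 vsan 10
    switch(config-zone)# member fcalias SQL01_HBA0
    switch(config-zone)# member fcalias VNX_SPA0
    switch(config-zone)# exit
    ! Add the zone to a zoneset and activate it
    switch(config)# zoneset name ZS_VSAN10 vsan 10
    switch(config-zoneset)# member SQL01_HBA0_VNX_SPA0
    switch(config-zoneset)# exit
    switch(config)# zoneset activate name ZS_VSAN10 vsan 10
    switch(config)# end
    switch# copy running-config startup-config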
Creating Storage Groups:
Now that we have zoned the HBAs to the array, we can allocate storage to your hosts. To do this we must create “Storage Groups”, which give the hosts connected to the array access to specific LUNs. Let's begin by logging into the array and creating a storage group (a command-line alternative using naviseccli is sketched after these steps):
1. Login to Unisphere and select the array from the dashboard:
2. Select “Storage Groups” under the Hosts tab:
3. Click “Create” to create a new storage group:
4. The following screen will appear; type in the name of the storage group. Typically you will want to use the name of the application or the host cluster name.
5. The screen below will pop up; at this time click “Yes” to continue and add LUNs and hosts to the storage group:
6. The next screen will allow you to select either newly created LUNs or LUNs that already exist in other storage groups. Once you add the LUN or LUNs to the group, click on the Hosts tab to continue adding hosts:
7. In the Hosts tab, select the hosts we previously zoned and click on the forward arrow. Once the host appears in the right pane, click “OK”:
8. At this point a new screen will pop up; click “Yes” to commit.
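As a command-line alternative to Unisphere, the same storage group operations can be scripted with naviseccli. A minimal sketch, assuming the SP is reachable at 10.0.0.50, the storage group is named SQL01_SG, array LUN 25 already exists, and the host registered itself as SQL01 (all placeholders):

    # Create the storage group
    naviseccli -h 10.0.0.50 storagegroup -create -gname SQL01_SG

    # Add an existing LUN (array LUN 25) to the group as host LUN 0
    naviseccli -h 10.0.0.50 storagegroup -addhlu -gname SQL01_SG -hlu 0 -alu 25

    # Connect the registered host to the storage group
    naviseccli -h 10.0.0.50 storagegroup -connecthost -host SQL01 -gname SQL01_SG -o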

Once you have completed these tasks successfully, your hosts will see new raw devices. From this point on, use your OS partitioning tool to create volumes.