- Ethernet NIC card
- TCP/IP offload engine (TOE) card
- iSCSI HBA
- FC end points
- Converged Enhanced Ethernet (or CEE)
- Converged Network Adapter
- FCoE switch
- Used for data sharing over geographically dispersed SANs
A group of servers and other necessary resources, coupled to operate as a single system.
This is the point in time to which systems and data must be recovered after an outage. It defines the amount of data loss that a business can endure. A large RPO signifies high tolerance to information loss in a business.
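The relationship between RPO and worst-case data loss can be sketched as simple arithmetic (a minimal illustration, not part of any real replication product):

```python
from datetime import datetime, timedelta

def data_loss_window(last_replica: datetime, outage: datetime) -> timedelta:
    """Worst-case data loss from an outage: everything written since
    the last good copy (the RPO anchor point) is gone."""
    return outage - last_replica

# Replicas taken hourly (RPO = 1 hour); an outage at 12:45 loses at
# most the 45 minutes of writes since the 12:00 copy.
print(data_loss_window(datetime(2024, 1, 1, 12, 0),
                       datetime(2024, 1, 1, 12, 45)))  # 0:45:00
```

A smaller RPO therefore means more frequent copies and less tolerable loss.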
A repository at a remote site where data can be periodically or continuously copied (either to tape drives or disks), so that there is always a copy at another site.
> Eliminating Single Points of Failure
> Multi-pathing software
> Backup / Restore
> Replication (Local / Remote)
> Full backup is a backup of the complete data on the production volumes at a certain point in time.
> Cumulative (or differential) backup copies the data that has changed since the last full backup.
> Incremental backup copies the data that has changed since the last full or incremental backup.
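The difference between the three backup types comes down to which reference point the "changed since" test uses. A minimal sketch with made-up file names and per-file modification timestamps (not a real backup API):

```python
def select_files(files, last_full, last_backup, mode):
    """Pick which files a backup job copies.
    files: {name: last_modified_timestamp}
    last_full: time of the last full backup
    last_backup: time of the last backup of any kind
    """
    if mode == "full":
        return set(files)                                   # everything
    if mode == "cumulative":                                # a.k.a. differential
        return {f for f, t in files.items() if t > last_full}
    if mode == "incremental":
        return {f for f, t in files.items() if t > last_backup}
    raise ValueError(mode)

files = {"a.txt": 5, "b.txt": 12, "c.txt": 20}
# Last full backup at t=10, last (incremental) backup at t=15:
print(sorted(select_files(files, 10, 15, "cumulative")))   # ['b.txt', 'c.txt']
print(sorted(select_files(files, 10, 15, "incremental")))  # ['c.txt']
```

Cumulative backups grow until the next full backup but need only two media sets to restore; incrementals stay small but the restore must replay the whole chain.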
> Client - sends data to the backup server or storage node
> Backup Server - manages backup operations and maintains the backup catalog
> Storage Node - writes data to the backup device
> Backup Device - stores the backup data
Backup and Restore Operation
Backup Operation - the backup server initiates the backup process for the different clients based on the backup schedule configured for them. A restore process is manually initiated by the backup client. The backup server identifies the backup media required for the restore.
Technology that conserves storage capacity and/or network traffic by eliminating duplicate data.
> Single Instance Storage (SIS)
>> Identifies and removes copies of identical files
> Sub-file Deduplication
>> Identifies and filters repeated data segments
>> Reduces file size
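Sub-file deduplication can be sketched by hashing fixed-size segments and storing each unique segment only once (fixed-size chunking is one of several real approaches; names here are illustrative):

```python
import hashlib

def dedup_store(data: bytes, chunk_size: int, store: dict) -> list:
    """Split data into fixed-size segments, keep each unique segment
    once in `store` (keyed by its SHA-256), and return the list of
    hashes that reconstructs the original -- the 'recipe'."""
    recipe = []
    for i in range(0, len(data), chunk_size):
        chunk = data[i:i + chunk_size]
        digest = hashlib.sha256(chunk).hexdigest()
        store.setdefault(digest, chunk)   # identical segments stored once
        recipe.append(digest)
    return recipe

store = {}
recipe = dedup_store(b"AAAA" * 3 + b"BBBB", 4, store)
print(len(recipe), len(store))  # 4 2 -- four segments referenced, two stored
```

The repeated `AAAA` segments are detected and filtered, which is exactly how deduplication conserves capacity and network traffic.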
Replication is the process of creating an identical/exact copy of data.
Drivers for Replication
> alternate source for backup
> fast recovery to facilitate faster RPO and RTO
> Enabling decision support activities, such as reporting.
> Developing and testing proposed changes
> Restarting an application from the replica
> Point in Time (PIT)
>> Non-Zero PIT
>> Near-Zero PIT
Process of replicating data within the same array or the same data center.
Compute Based Replication
> Logical Volume Manager (LVM) based mirroring
>> a write to a logical partition is written to two physical partitions by the LVM device driver
> File System Snapshot
>> Requires a fraction of the space used by the production file system
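The LVM mirroring idea above can be sketched as one logical write fanned out to every physical partition in the mirror set (an in-memory illustration; real LVM does this inside the device driver):

```python
def mirrored_write(offset: int, data: bytes, partitions: list) -> None:
    """LVM-style mirroring sketch: one logical write is duplicated
    to every physical partition backing the logical volume."""
    for part in partitions:
        part[offset:offset + len(data)] = data

# Two hypothetical 16-byte "physical partitions":
p1 = bytearray(16)
p2 = bytearray(16)
mirrored_write(4, b"DATA", [p1, p2])
print(p1 == p2)  # True -- both mirrors hold identical data
```

If one physical partition fails, the other still holds a complete copy, which is the availability benefit mirroring buys at the cost of doubled capacity.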
Full Volume Mirroring
> Target is attached to the source and established as a mirror of the source.
> Data on the source is copied to the target.
> New updates to the source are also updated on the target.
> Target is unavailable while attached
Production Compute System
> A compute system accessing data from one or more LUNs on the storage array.
> These LUNs are known as source LUNs (devices / volumes), production LUNs, or simply the Source.
> LUN(s) on which the data is replicated is target LUN.
Pointer Based Full Volume Replication
> provide full copy of the source data on the target
> target is made immediately available at the activation of the replication session
> The time of activation defines the PIT copy of source
> can be activated in either the Copy on First Access (CoFA) or on the Full Copy mode.
Pointer Based Virtual Replication
> Targets do not hold data, but hold pointers to where the data is stored
> the target is immediately accessible
> uses Copy on First Write (CoFW) technology
> When a write is issued to the source for the first time after session activation:
>> original data copied to "save Location"
>> pointers point to data
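The CoFW mechanics above can be sketched with a small class: the snapshot starts as pure pointers, and the original data is copied to a save area only just before the source block is overwritten for the first time (illustrative, in-memory):

```python
class CoFWSnapshot:
    """Copy-on-First-Write sketch: the target holds only pointers;
    displaced originals land in a 'save location'."""
    def __init__(self, source: dict):
        self.source = source
        self.save = {}                      # save location for originals

    def write(self, block: int, data: str) -> None:
        if block not in self.save:          # first write since activation?
            self.save[block] = self.source.get(block)  # preserve original
        self.source[block] = data

    def read_snapshot(self, block: int):
        # Pointer logic: saved copy if displaced, else the live source.
        return self.save.get(block, self.source.get(block))

vol = {0: "old0", 1: "old1"}
snap = CoFWSnapshot(vol)
snap.write(0, "new0")
print(vol[0], snap.read_snapshot(0))  # new0 old0
```

Only changed blocks consume save-location space, which is why pointer-based virtual replicas are so much smaller than full-volume copies.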
The process of creating replicas of production (local) data to remote sites (locations)
Synchronous Remote Replication
> writes must be committed to the source and the target, prior to acknowledging “write complete” to the compute system
> additional writes on the source cannot occur until each preceding write has been completed and acknowledged
> application response is extended
Asynchronous Remote Replication
> write is committed to the source and immediately acknowledged to the compute system
> Data is buffered at the source and transmitted to the remote site later
> requires bandwidth only for the average write workload, not the peak
> supports replication over long distances
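The synchronous/asynchronous contrast can be sketched with two toy classes (hypothetical names; real arrays implement this in firmware): async acknowledges as soon as the source commits and drains a buffer later, while sync must reach the target before acknowledging.

```python
class AsyncReplicator:
    """Asynchronous sketch: ack immediately, transmit buffered writes later."""
    def __init__(self):
        self.source, self.target, self.buffer = {}, {}, []

    def write(self, key, value):
        self.source[key] = value        # commit locally
        self.buffer.append((key, value))
        return "ack"                    # immediate acknowledgment

    def drain(self):
        while self.buffer:              # transmit to the remote site later
            k, v = self.buffer.pop(0)
            self.target[k] = v

class SyncReplicator(AsyncReplicator):
    """Synchronous sketch: the target must be updated before the ack,
    so response time grows with distance."""
    def write(self, key, value):
        self.source[key] = value
        self.target[key] = value        # committed remotely before the ack
        return "ack"

r = AsyncReplicator()
r.write("blk", "v1")
print(r.target)   # {} -- target lags until drain()
r.drain()
print(r.target)   # {'blk': 'v1'}
```

The buffer is exactly the data at risk: if the source fails before `drain()`, those writes are lost, which is why asynchronous replication has a non-zero RPO and synchronous replication a near-zero one.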
Compute-based Remote Replication
> all the replication is done using the CPU resources of the compute system, by software running on the compute system
> LVM-based: Writes to the source volumes are sent to the remote compute system by the LVM
> Database replication via log shipping
Storage Array Based Remote Replication
> replication is performed by the array operating environment
>> disk buffered
>>> combination of local and remote replication
>>> RPO in the order of hours
>>> Low bandwidth requirement
>>> Extended distance solution
> Three-site replication - Replication may be synchronous to one of the two sites, and provides a zero-RPO solution. It may be asynchronous or disk buffered to the other remote site, and provides a finite RPO.
> SAN-based remote replication - allows replication between heterogeneous vendor storage arrays
Continuous Data Protection (CDP)
> data changes are continuously captured and stored in a separate location from the primary storage. RPOs are arbitrary and do not need to be defined in advance.
Repository volume - must be a dedicated volume on the SAN-attached storage at each site. It stores configuration information about the CDP appliance.
Journal volume - stores all data changes on the primary storage. The journal contains the metadata and data that allow rollback to various recovery points. The amount of space configured for the journal determines how far back in time recovery can go.
Replication volumes - the data volumes to be replicated.
The Write splitter intercepts writes from the initiator and splits each write into two copies; one copy is sent to CDP appliance for replication and the other, to the designated production volume.
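The splitter-plus-journal pipeline can be sketched in a few lines: every write updates the production volume and records the displaced data in a journal, so any earlier point in time can be rebuilt (an in-memory illustration with made-up names, not a real CDP product):

```python
class WriteSplitter:
    """CDP sketch: each write hits the production volume AND a journal
    of (sequence, block, old_value) entries that enables rollback."""
    def __init__(self):
        self.volume = {}
        self.journal = []

    def write(self, seq: int, block: int, data: str) -> None:
        self.journal.append((seq, block, self.volume.get(block)))
        self.volume[block] = data

    def rollback(self, to_seq: int) -> None:
        # Undo journal entries newer than the chosen recovery point.
        while self.journal and self.journal[-1][0] > to_seq:
            _, block, old = self.journal.pop()
            if old is None:
                self.volume.pop(block, None)
            else:
                self.volume[block] = old

s = WriteSplitter()
s.write(1, 0, "a"); s.write(2, 0, "b"); s.write(3, 0, "c")
s.rollback(1)            # roll back to an arbitrary recovery point
print(s.volume)          # {0: 'a'}
```

Because every change is journaled, the recovery point is arbitrary, which is the defining property of CDP; the journal size bounds how far back the rollback can reach.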
Classic Data Center (CDC) Management Activities
> monitoring, alerting, and reporting
> availability management
> capacity management
> performance management
> security management
Compute systems, networks, and storage
> accessibility - availability of a component to perform a desired operation.
> capacity - amount of storage available.
> performance - efficiency of components
> security - tracking and preventing unauthorized access
Alerting of events
Alerting of events is an integral part of monitoring.
> Information alerts provide information that does not require intervention
> Warning alerts require administrative attention
> Fatal alerts require immediate attention
Reporting on CDC resources involves keeping track and gathering information from various components / processes.
> Capacity planning
> Chargeback reports
> Performance reports
Capacity planning reports
Capacity planning reports also contain current and historic information about storage utilization, file system, database tablespace, and ports.
Chargeback reports contain information about the allocation or utilization of CDC infrastructure components by various departments or user groups.
Performance reports provide details about the performance of various infrastructure components in a CDC.
Establishing a proper guideline for all configurations to ensure availability based on service levels.
> Eliminate single points of failure
> Perform data backup and replication
> Ensures adequate availability of resources for all services based on their service-level requirements.
> Manages resource allocation
> Trend analysis of the actual utilization of allocated storage.
> ensures the optimal operational efficiency of all components.
> Performance analysis
>> identify bottlenecks
>> fine tune to enhance performance
> Prevents unauthorized access or activities
>> Compute - managing the user accounts and access policies
>> SAN - configuration of zoning to restrict an HBA's unauthorized access to specific storage array ports
>> Storage array - LUN masking
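The LUN-masking control named above can be sketched as a simple visibility check: the array exposes a LUN only to the initiator WWNs listed for it, independently of what the SAN zoning allows (hypothetical WWNs and data layout, for illustration only):

```python
def visible_luns(initiator_wwn: str, masking: dict) -> set:
    """LUN-masking sketch: return the LUNs the array presents to a
    given initiator. `masking` maps LUN id -> set of permitted WWNs."""
    return {lun for lun, wwns in masking.items() if initiator_wwn in wwns}

masking = {
    "LUN0": {"10:00:00:00:c9:aa:bb:01"},
    "LUN1": {"10:00:00:00:c9:aa:bb:01", "10:00:00:00:c9:aa:bb:02"},
}
print(sorted(visible_luns("10:00:00:00:c9:aa:bb:02", masking)))  # ['LUN1']
```

Zoning and LUN masking are layered defenses: zoning limits which ports an HBA can even reach, and masking filters which LUNs it sees once connected.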
Challenges of Information Management
> Exploding digital universe
> Increasing dependency on information
> Changing value of information
Information lifecycle management (ILM)
A proactive strategy that enables an IT organization to effectively manage data throughout its lifecycle, based on predefined business policies. This allows the optimization of the storage infrastructure for maximum return on investment.
An ILM strategy should be Business-Centric
It should be integrated with the key processes, applications, and initiatives of the business to meet both the current and future growth in information.
An ILM strategy should be Centrally Managed
All the information assets of an organization should be under the purview of the ILM strategy.
An ILM strategy should be Policy-based
The implementation of ILM should not be restricted to a few departments. ILM should be implemented as a policy and should encompass all business applications, processes, and resources.
An ILM strategy should be Heterogeneous
An ILM strategy should take into account all types of storage platforms and operating systems.
An ILM strategy should be Optimized
As the value of information varies, an ILM strategy should consider different storage requirements and allocate storage resources based on the information’s value to the organization
An ILM strategy should be Tiered Storage
Tiered storage - different storage levels so as to reduce the total storage cost. Each tier has different levels of protection, performance, data access frequency, and other considerations. Information is stored and moved among tiers based on value.
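A tiering policy of the kind ILM describes can be sketched as a simple rule mapping data value (here approximated by access recency) to a storage tier. The tier names and day thresholds are made-up examples:

```python
def assign_tier(days_since_access: int) -> str:
    """ILM-style tiering sketch: place data on a tier by access
    recency (thresholds are illustrative, not a standard)."""
    if days_since_access <= 7:
        return "tier1-flash"      # hot: high performance, high cost
    if days_since_access <= 90:
        return "tier2-sas"        # warm: balanced
    return "tier3-archive"        # cold: low cost, lower performance

print(assign_tier(2), assign_tier(30), assign_tier(400))
# tier1-flash tier2-sas tier3-archive
```

Re-evaluating this rule periodically and migrating data accordingly is what moves information among tiers as its value changes, reducing total storage cost.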
Storage Solution: EMC Symmetrix VMAX
Intelligent Storage Solution
> Virtual Matrix provides an interconnect that enables resource sharing across all VMAX engines, which in turn enables massive scale-out.
> Enginuity is the OS. It provides simplified storage management and provisioning.
Storage Solution: EMC VNX
> different connectivity options
>> SAN (that is, block connectivity with iSCSI, Fibre Channel, or Fibre Channel over Ethernet)
>> NAS (that is, CIFS, NFS)
>> Cloud (REST or SOAP)
Storage Solution: EMC Connectrix
> Enterprise directors - high port density and high component redundancy
> Departmental switches - best for workgroup, mid-tier environments.
> Multi-protocol routers - support mixed iSCSI and FC environments.
>> can bridge FC SAN and IP-SAN
Continuous Data Protection (CDP) Elements
> CDP appliance - runs the CDP software and handles remote replication
> Storage volumes - repository, journal, and replication volumes
> Write splitter - intercepts writes from the initiator and splits each into two copies