
  1. Tivoli Storage Manager 7.1 Features – Operations Center, TSM Server and Client, HSM for Windows, FastBack for Workstations. Tricia Jiang, Storage Enablement, tricia@us.ibm.com. November 19, 2013 – futures.

  2. TSM 7.1 Family

  FCM for Windows
  • Distributed GUI – single pane of glass to access all FCM Windows instances
  • Support for Exchange Server 2013
  • Exchange mailbox and item-level restores from TSM for VE backups
  • Enhanced support for SQL Server 2012 Availability Groups
  • PowerShell cmdlets to drive FCM operations
  • Multi-language support for the Exchange IMR browser
  • Exchange mailbox restore to PST files greater than 2 GB
  • Automatically stop Exchange services for instant restore

  FCM for UNIX
  • Support for application data stored on NetApp/N series NAS file systems
  • Support for GPFS snapshots to support DB2 pureScale for AIX and Linux
  • Support operation in a VMware virtual machine

  FCM for VMware
  • Support for VMware vSphere 5.5
  • Instant restore for Virtual Machine File System (VMFS) datastores
  • Enable operations in VMware environments where Site Recovery Manager (SRM) is deployed

  DP for Domino
  • Domino 9
  • Windows 2012
  • 64-bit Linux x86_64 support

  DP for Oracle
  • Oracle on Windows 2012
  • Oracle 12c support

  DP for Exchange
  • Support for Exchange 2013
  • Exchange on Windows 2012 servers
  • Remote GUI
  • Multi-language support in the mailbox restore browser GUI
  • Mailbox restore to PST files greater than 2 GB
  • Mailbox restore to Unicode PST files
  • Exchange mailbox and item-level restores from disk files
  • Collecting mailbox history for improved backup performance
  • Calendar widget for entering dates in the MMC GUI
  • Instant restore – automatically stop Exchange services that must be stopped
  • IMR browser GUI – more efficient queries of data in the Recovery Database
  • New PowerShell cmdlet interface

  DP for SQL
  • Remote GUI
  • Extended SQL backup commands to improve usability of SQL AlwaysOn Availability Group (AAG) backups through more wildcard filtering options

  FB4WKS
  • Add reporting to the Central Admin Console
  • Central Admin Console scalability
  • Rebrand CDP Starter Edition & OEM to FB4WKS Starter Edition & OEM

  TSM 7.1 Server
  • Increased scalability with up to 10x improvement for daily ingest
  • Automated failover in node replication
  • Upgrade to DB2 10.5
  • IBM Installation Manager packages for the TSM server
  • Multi-threaded storage pool migration
  • Collocation by filespace group
  • TSM server DB backup over shared memory instead of TCP/IP
  • Windows Server 2012 in Server Core mode & Centera on Linux x86_64
  • Improved server scalability for large objects
  • SAN mapping & AUDIT LIBVOLUME

  TSM 7.1 Operations Center
  • Functions and views based on admin authority
  • Immediate backup of at-risk clients, TSM DB, logs and storage pools
  • Editable client and server properties
  • Activity log viewer
  • Create TSM client wizard
  • Server processes and sessions viewer with cancel button

  TSM 7.1 B/A Client
  • Resiliency enhancements & HTTPS protocol support for NetApp snapshot-assisted progressive incremental backup (SnapDiff)
  • Automated System Recovery for UEFI or EFI hardware
  • Macintosh BA client – SSL support added
  • Linux BA client support for Btrfs (B-tree file system)

  DP for VMware
  • Full VM instant restore
  • Stand-alone DP for VMware user interface
  • Filespace-level collocation groups to support VE
  • vCloud Director tenant vApp support
  • DP for Exchange/FCM for Windows IMR support for DP for VMware mounted snapshots
  • Backup/recovery of VMs with Microsoft AD
  • DB-level recovery from VMs hosting Microsoft SQL
  • Report on “What Failed Last Night”

  HSM for Windows
  • Backup performance enhancements
  • ReFS support
  • ACL-independent migrate/recall
  • Administrative stub-file recall

  Space Management – UNIX
  • Seamless failover/failback
  • Improved handling of streaming recall arguments
  • Improved removal function for multi-server managed file systems
  • Command to verify the state of an HSM node
  • Recall files to state resident – LTFS only

  3. IBM Tivoli Storage Manager Operations Center

  TSM 6.3.4 introduced:
  • Collecting status and monitoring data for the UI
  • Monitoring and lifecycle management of alerts: notifies the Operations Center as alerts occur & optionally emails alerts directly to the designated administrator
  • A data collection engine that consolidates a “singular” view of multiple TSM servers for a given Operations Center

  TSM 7.1 brings the ability to:
  • Register new clients to TSM, including selecting a policy and associating to a schedule
  • Re-drive backup for a selected client
  • Kick off a TSM server DB backup
  • Cancel specific sessions and processes
  • Edit properties for clients, servers and global settings
  • View details of a TSM server, including views of sessions and processes
  • Utilize different classes of TSM administrators to accomplish various tasks

  • Small install package and footprint
  • Installs in just a few minutes

  A web application that uses:
  • A visual dialogue with administrators delivering the decision-point information necessary for successful use of the product
  • An integrated command line to bridge from visibility to control
  • A hub/spoke model to monitor multiple TSM servers
  • Liberty web server & Dojo as primary technology

  4. TSM Operations Center 7.1 – Roadmap (1H13, 2H13, 1H14, 2H14, ...)
  • Visibility (status, monitoring, alerts)
  • Control (actions, commands, workflows, configuration, settings)
  • Automation (complex tasks, wizards, advisors, best-practice guidance)
  • Will continually evaluate and re-prioritize based on feedback
  • Transparent development as part of the agile, iterative and UCD models
  • Bridge from visibility to control

  5. Role Based Security • Various privileges depending on the hub server’s administrator authority: • System: full read and write • Policy: manage backup & archive sessions • Storage: manage server storage • Operator: manage alerts and client sessions • Node Owner: open the web interface of a client from the OC • Can use the GRANT AUTHORITY command to change authority • A different icon for each admin role • Can hover over the icon to see the role • Different actions for different roles • Actions the administrator is not privileged to perform are grayed out • Pop-ups explain why an action can’t be performed
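The authority classes above are managed with the server’s GRANT AUTHORITY command; a hypothetical sketch (the administrator name is illustrative):

```
grant authority HELPDESK1 classes=operator
query admin HELPDESK1 format=detailed
```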

  6. Log Viewer for Alerts • Shows the 500 messages that preceded the alert and the 500 that follow • Alerts are stored on the hub server • Available even if the spoke server is down • Messages are pulled in real time directly from the server

  7. Detailed Server Overview • Activity and maintenance over the last 24 hours • Primary and copy storage pool capacity over 2 weeks • The Restarted line shows if the TSM server restarted • Can hover over it to get the date and time • The DB Backup line shows if a DB backup was taken successfully

  8. Detailed Server Alerts • Shows alerts just for this server • Select an alert, and the corresponding activity log messages appear below • Will show 500 messages before the alert • Will show 500 messages after the alert

  9. Server Properties • You can change the following server properties: • Security properties: password authentication; password common expiration period; invalid sign-on attempt limit; minimum password length; client-registration permission • Session and schedule properties: scheduling mode; duration for one-time actions; schedule randomization; client-polling interval; schedule-event retention period • History and log properties: activity-log data retention period; activity-summary data retention period; accounting-record creation setting • Replication properties: outbound-replication setting; peer server (target replication server); default archive rule; default backup rule; default space-management rule; replication-history retention period • Allows editing of certain properties, if you have admin privileges • Will prompt you if you make changes and then navigate away from this page without saving them

  10. Server Active Tasks • Tasks = sessions and processes • Can filter by Name / ID / State / Type • Can choose type: Process / Administrator / Client / Server • Messages pertaining to the event show up in the bottom portion of the screen • Up to 1000 messages will show up

  11. Active Tasks - Cancel • Can cancel one or more tasks • Will show if cancel succeeded

  12. Completed Tasks • Only for 7.1 TSM servers • Shows completed client node sessions and processes for the last 24 hours • Can export info • Shows only parent tasks • Shows if successful • Shows how many messages apply to that session (up to 1000)

  13. Backup Database • Must have correct admin privileges • Can choose the type of backup: full or DB snapshot • The device class choice has status information to help you choose the correct devclass • “Close & View Tasks” button takes you to the server detail page
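The dialog drives the server’s standard BACKUP DB command; an equivalent command-line sketch (the device class name is illustrative):

```
backup db devclass=DBBACK_FILE type=full
```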

  14. Backup Storage Pools • Must have correct admin privileges • If a copy storage pool is not set up, you will not be able to back up • The Copy Storage Pool field has status information to help you choose the correct storage pool • “Close & View Tasks” button takes you to the server detail page
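This corresponds to the BACKUP STGPOOL command; a sketch with illustrative pool names:

```
backup stgpool TAPEPOOL COPYPOOL maxprocess=2
```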

  15. Backup Now for Clients • Client must be associated with a TSM schedule • Must be server-prompted scheduling mode • Will re-drive the last schedule that failed

  16. Edit Properties - Client • You can change the following properties: • General properties: policy domain; option set; user contact information; client URL • Data-management properties: maximum number of mount points; data write path; data read path; archive-deletion permission; backup-deletion permission; compression setting; collocation group; deduplication location • Replication properties: state; mode; default archive rule; default backup rule; default space-management rule • Will prompt you if you navigate away from the page without saving changes • Can click “x” in fields to reset values to the TSM default value

  17. Global Settings • Will prompt you if you navigate away from the page without saving changes • Settings will be grayed out if you don’t have the correct TSM admin privileges • Changes are made globally to the hub and all spoke servers • Can expand the twisty on the succeeded message to see all the commands that were issued

  18. Replication Column for Clients • Replication – None, Send, Receive, or SyncSend / SyncRecv if import/export • Peer Server – name of the peer replication server • V7.1 server – nodes on a 7.1 server will have the above info • V6.3.4 spoke & V7.1 hub server – the nodes on the 7.1 hub server will have info; spoke server nodes will not have any info • V6.3.4 server – no replication info – shows up as “unreported”

  19. Client Quick Look • Node Replication information added

  20. Update At-Risk Settings for Client • Only available on 7.1 TSM servers • Can utilize global default settings • Can choose to not set at-risk • OR • Can customize when a node becomes at risk

  21. Client Provisioning • Simple, intuitive & quick way to register nodes • Can choose which hub or spoke server to add the node to • Can choose SSL, replication & deduplication if available on the server • Validates things like names and passwords • Can choose a policy domain • Can choose schedule(s) to associate with • Can choose an option set • Works with 6.3.4 TSM servers, but no “at risk” definitions
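Behind the wizard, client provisioning amounts to the classic REGISTER NODE / DEFINE ASSOCIATION sequence; a sketch with illustrative node, domain, and schedule names:

```
register node FILESRV01 n0dePassw0rd domain=STANDARD
define association STANDARD DAILY_INCR FILESRV01
```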

  22. TSM Operations Center 7.1
  • The Java level has been upgraded
  • From: Java(TM) SE Runtime Environment 1.6.0 (jvmwa6460sr9-20110324_78506)
  • To: Java(TM) SE Runtime Environment 1.7.0 (pwa6470sr5-20130619_01 (SR5))
  • The Liberty web server has been upgraded
  • From: 8.5.0.1 (wlp-1.0.1.cl0120121004-1949/websphere-kernel_1.0.1)
  • To: 8.5.5.0 (wlp-1.0.3.20130608-1425)
  • The 7.1 Operations Center requires the hub to be a 7.1 TSM server
  • Spoke servers are permitted to be at version 6.3.4 through version 7.1
  • A spoke server cannot be at a higher version than the hub server
  • Online space calculator: http://www-01.ibm.com/support/docview.wss?uid=swg21641684
  • Additional disk space for every 1000 clients on the hub: add 2 GB for the database and 10 GB for the archive log (same as v6.3.4)
  • The hub must be sized to also contain spoke data
  • Additional disk space for every 1000 clients on a v6.3.4+ spoke: add 2 GB for the database and 10 GB for the archive log
  • Additional disk space for every 1000 clients on a v7 spoke: negligible impact to the database; add 600 MB for the archive log
  • V7 spokes no longer push all status data to the hub, and the hub no longer needs to store and scan it all to generate grids
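The hub-sizing figures above can be folded into a small calculation; this is a rough sketch (the function name is mine, and it only reflects the per-1,000-client numbers quoted on this slide, not the full online space calculator):

```python
# Extra hub-server disk space implied by the slide's sizing figures:
#   hub or v6.3.4 spoke clients: 2 GB database + 10 GB archive log per 1,000
#   v7 spoke clients: negligible database growth + 0.6 GB archive log per 1,000
def oc_extra_space_gb(hub_clients, v634_spoke_clients=0, v7_spoke_clients=0):
    """Return (extra_db_gb, extra_archive_log_gb) for the hub server."""
    full_units = (hub_clients + v634_spoke_clients) / 1000.0
    v7_units = v7_spoke_clients / 1000.0
    db_gb = 2.0 * full_units              # v7 spokes: negligible DB impact
    log_gb = 10.0 * full_units + 0.6 * v7_units
    return db_gb, log_gb

# Example: 2,000 clients on the hub plus 3,000 clients on v7 spokes
print(oc_extra_space_gb(2000, v7_spoke_clients=3000))
```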

  23. Node Replication with Automated Failover
  (Diagram: client Node X, with Server A replicating its database and storage hierarchy to Server B)
  • Pre-failure
  • Server A sends failover-server connection information to the client, which stores it locally
  • Server A replicates node data and metadata to secondary Server B
  • Failover
  • Server A becomes unavailable
  • Based on the failover connection information and policy, the client is automatically redirected to Server B for retrieve operations
  • Failback
  • When Server A becomes available, client operations are directed back to Server A
  • Customers requiring “hot” standby environments for their production servers (including DR) can use this capability to provide hot standby servers and satisfy those requirements

  24. Node Replication Server Considerations
  • Failover – how it works
  • Source & target server exchange the most current failover info during all connections
  • Source server provides the client with target info during normal operations
  • What is required for failover
  • Source and target server must be 7.1 and configured for node replication
  • Best to migrate both servers to 7.1 at the same time
  • Otherwise, migrate the target server first; replication continues even if the source is not 7.1
  • If replication is done in both directions, migrating only one server at a time prevents replication in one direction until the other server is migrated
  • 7.1 client registered to the source replication server and enabled for replication
  • At least one replication must be performed after migrating to 7.1
  • Planning
  • Set FAILOVERHLADDRESS with the high-level communication address for failover
  • Ensure the communication method is configured on both source and target
  • Enable replication for all node definitions that will be required for recovery: nodes with data to be replicated, proxy agent nodes, and authorized nodes from a filespace authorization rule

  25. Node Replication Server Configuration
  • Start or schedule replication
  • QUERY NODE verifies the failover server: “Last Server Replicated to”
  • QUERY REPLSERVER shows the replication server’s failover communication parameters
  • Replication operations run as in prior releases
  • Redirecting replication to a different server eventually updates the client’s failover config
  • 7.1 replicates additional data:
  • Proxy agent nodes of a replicated node
  • Filespace authorization rules
  • Associated authorized nodes from filespace authorization rules
  • Administrative IDs (same name) that have client owner authority
  • Client option sets referenced by replicated nodes
  • Node definitions with no data – no filespace definitions

  26. Node Replication Client Side
  • Flow of operation: automated client failover for recovery of data (backup disabled!)
  • Administrators are not required to manually switch clients from the primary to the secondary server
  • Clients enhanced to automatically switch to the secondary server
  • If the client is not able to connect to the primary server, it uses the saved failover connection information from the client options file and attempts to sign on to the failover server
  • If there is no saved connection information in the client options file, the log-in to the primary server fails as it does today
  • Any time the client signs in to the secondary server in failover mode, messages are issued indicating that the client is in failover mode; you can set an “ALERT” on this in the Operations Center
  • Clients can immediately start data recovery (restore, retrieve) operations
  • Each time the client has to sign on, it first checks whether the primary server is up

  27. New Node Replication Attributes, Commands & Client Options

  New commands:
  • QUERY REPLSERVER – view information for all servers that are performing replication with this server
  • REMOVE REPLSERVER – delete ALL replication state information for all replicated nodes on the specified server; issue from both target and source server
  • SET FAILOVERHLADDRESS – server-level IP address that clients use to connect during failover, if it is different from what replication uses

  New attributes in the server database:
  • Node definition (Q NODE F=D, SELECT): Last Server Replicated to; Replication Primary Server
  • Filespace definition (Q FILESPACE F=D, SELECT): Last Store Date; Last Archive Date
  • Replicating servers (Q REPLSERVER): failover address – HLA, LLA, SSL port; heartbeat timestamp
  • Server-wide attributes (Q STATUS): failover HLA for a target server
  • Sessions indicate failover state (Q SESSION)

  New client options:
  • MYREPLICATIONSERVER – server to use for replication failover
  • REPLSERVERNAME – begins a stanza to be used for replication failover
  • REPLTCPSERVERADDRESS – within a replication stanza, the TCP server address
  • REPLTCPPORT – within a replication stanza, the TCP server port
  • REPLSSLPORT – within a replication stanza, the TCP server SSL port
  • REPLSERVERGUID – within a replication stanza, the replication server’s GUID
  • MYPRIMARYSERVER – primary server name
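As an illustration, a failover stanza built from the options above might look like this in the client options file (server name, address, and ports are hypothetical; in practice the client records these values automatically after signing on to a replication-enabled 7.1 server):

```
MYREPLICATIONSERVER  SERVERB
REPLSERVERNAME       SERVERB
  REPLTCPSERVERADDRESS  serverb.example.com
  REPLTCPPORT           1500
  REPLSSLPORT           1543
```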

  28. Server DB Backup over Shared Memory instead of TCP/IP
  • The TSM server can be configured to perform DB backups & restores via shared memory rather than the TCP/IP loopback
  • Avoids TCP loopback adapter problems, which can severely degrade database backup performance
  • Recommended for customers who experience significantly degraded database backup performance
  • Recommended for customers using UNIX or Linux, particularly AIX
  • Configuration wizard updated to provide a choice between shared memory & TCP/IP for DB backup/restore; shared memory is selected by default
  • Upgrade wizard automatically configures servers for shared-memory DB backups & restores if the server was already configured to use shared memory
  • Manual configuration for an existing instance: change the API options file and, if needed, the server options file
  • Database backups using shared memory demonstrated:
  • Best case: significant potential improvements in CPU and throughput
  • About 18% throughput improvement for DB backup with single and 2 streams
  • About 3% and 4% throughput improvement for DB restore with single and 2 streams
  • About 19% and 22% CPU improvement for DB backup with single and 2 streams
  • About 24% and 21% CPU improvement for DB restore with single and 2 streams
  • Worst case: comparable performance
  • For more than 2 streams, DB backup/restore throughput doesn’t scale well, due to hardware (disk) I/O limitation
  • UNIX & Linux TSM server only in TSM 7.1
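For the manual route, the change essentially amounts to switching the communication method in the options used by database backup; a sketch (the stanza name and port value are assumptions, not taken from this deck):

```
* API options stanza used by database backup (illustrative values)
SERVERNAME  TSMDBMGR_TSMINST1
  COMMMETHOD  SHAREDMEM
  SHMPORT     1510
```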

  29. EXTEND DBSPACE – Distributes Data Across All DB Directories
  • Extending database space now includes distributing data across all database directories and then reclaiming unused space & returning it to the system
  • The EXTEND DBSPACE command adds directories to the database
  • 7.1 adds redistribution of data & reclamation of space to make new database directories available for immediate use and to improve parallel I/O performance
  • Requires additional planning, because the redistribution operation takes considerable system resources; complete the process while the server is not handling a heavy load
  • If the server is offline, DSMSERV EXTEND DBSPACE performs the same function as EXTEND DBSPACE
  • Example: EXTEND DBSPACE db_directory RECLAIMSTORAGE=YES

  30. Increased Scalability with up to 10x Improvement
  • Up to 10x improvement for daily ingest of deduplicated and replicated data
  • Over previously published rates & reference architectures optimized for data deduplication
  • Total time required for the TSM server to ingest, deduplicate, replicate, expire and reclaim data is reduced significantly between TSM v6.3 and TSM v7.1
  • Changes made: primarily internal algorithm changes and updates to the TSM schema; deduplication changes; large-object transaction changes
  • Tuning and best-practice guides will be updated
  • The improvement figure also includes gains from a new reference architecture (blueprint project)
  • Extends scale and performance

  31. Deduplication Enhancements
  Goal: reduce the overall footprint of the deduplication engine
  1. Allow more concurrency between the various processes that access the deduplication engine
  2. Streamline resource-intensive deletion of deduplicated data to run in the shortest time possible
  3. More concurrency for retrieval operations (i.e. client, replication, etc.)

  1. Concurrency of processes – what changed?
  • A new database flag was added, set by reclamation, MOVE DATA, or client-side dedup, to mark files as completely dehydrated once all common chunk data has been squeezed out
  • The new flag allows simplified movement of deduplicated data, because the deduplication layout need not be known to move data around the storage pool (intra-pool)
  What to expect?
  • Once at 7.1, all new aggregates will have the flag set
  • Over time, reclamation should complete in a shorter window and require much less DB2 resource
  • TSM/DB2 will have more time to spend on other operations, reducing the housekeeping window
  • At a minimum, the DB2 footprint for data movement/copy will go down

  2. Deletion of deduplicated data – what changed?
  • Complete overhaul of the background process to streamline analyzing and processing de-referenced chunks
  What to expect?
  • Processing de-referenced chunk deletions is faster: in-house testing showed upwards of 5x improvement

  3. Retrieve and restore – what changed in the area of serialization?
  • Reduced the period of time locks are held during a restore/retrieve operation
  • Updated the lock modes to be more compatible with other access requests while maintaining the right protections
  • Instead of locks being held across the entire transaction, all locks are released once the requested file has been retrieved
  What to expect?
  • Retrieval operations will potentially perform much better even when other back-end operations are taking place
  • With increased concurrency, overall performance of operations like replication increases, and resource conflicts are minimized

  32. TSM Server Infrastructure Enhancements
  Break up very large objects into smaller objects with separate transactions
  • Avoids issues with backing up multi-TB objects in a single transaction
  • Previously, if the client sent a 50 GB object, the TSM server stored one 50 GB bitfile (which may or may not be deduplicated), creating possible locking, scalability and server stability problems
  • Now the client sends 50 GB, but the TSM server stores it as five 10 GB fragments, with a specific reconstruction sequence
  • The five fragments together form the Super Aggregate
  • While in a pool, each fragment is managed independently
  • During IDENTIFY processing, each fragment is processed by itself and committed
  • During RECLAMATION processing, instead of moving an entire 50 GB object that might span out of a reclaimable volume, only the fragments completely residing on, or spanning out of, a reclaimable volume are moved
  • During retrieve, recall, and restore processing, each fragment is opened independently for retrieval
  • Continues to provide protection for larger and larger files

  33. Uplift to DB2 V10.5
  • TSM V6.1 – DB2 V9.5 – limits: 1 TB or 1 billion objects
  • TSM V6.2 – DB2 V9.7 – limits: 2 TB or 2 billion objects
  • TSM V6.3 – DB2 V9.7 – limits: 4 TB or 4 billion objects
  • TSM V7.1 – DB2 V10.5
  • Benefits: currency for database technology; leverage enhancements where appropriate; benefit from continued performance, RAS, security, and other updates being made by DB2
  • Notes:
  • DB2 10.5 can’t restore online backups taken by DB2 9.7, so take a DB backup right after the upgrade
  • DB2 10.5 does not support Solaris 11, so the TSM server will not be able to support it either
  • DB2 10.5 drops support for RHEL 5, so the TSM server also has to drop support
  • DB2 10.5 does not support upgrade from a 9.5 instance; TSM 6.1 users must upgrade to 6.2 or 6.3 before 7.1

  34. Install Changes
  • New installer for the IBM Tivoli Storage Manager server, using IBM Installation Manager
  • New installer components: server, storage agent, devices, server language packs, license, and Operations Center
  • New installer features:
  • Faster install with improved logging
  • All TSM server products are part of the same package group and can be managed together
  • A single repository can be used for remote installs
  • Server and storage agent can now be installed on the same system on Windows
  • Upgrading to the new installer:
  • IBM Installation Manager is separately installable software and is faster and more reliable
  • Can upgrade directly from a previously installed server; the only exception is a 6.1.x server
  • No more native packages on UNIX/Linux
  • Server, storage agent, license, devices, and language packs now managed by IBM Installation Manager
  • Unified installation packages
  • Prevents install if prerequisites are not met

  35. Multi-Threaded Storage Pool Migration
  Improved efficiency for migration from storage pools with the DISK device class for nodes with a large number of filespaces
  • Migration from sequential storage pools is volume based, not node based
  • Applies to migration to pools that do not have collocation specified, or that have collocation by filespace
  • Previously, multi-process migration from DISK was limited to a single node being processed on a given thread
  • No parallelism for nodes with many filespaces, because the work was not spread across the other threads
  • Now, migration workloads from DISK are better shared & optimized across the threads assigned to the process
  • Allows a node with many filespaces to achieve parallelism, with the work for node1/fs1/archive data assigned to one thread while node1/fs2/backup data is assigned to another, and so on
  • Nodes with just a few very large filespaces achieve somewhat better parallelism as work is spread across more than a single thread
  • Does NOT help when a client node has a single “monster” filespace
  • Example:
  • UPDATE STGPOOL FILEPOOL COLLOCATE=FILESPACE
  • UPDATE STG DISKPOOL MIGPROCESS=4

  36. New TSM Server Platform Support
  • Support for Windows Server 2012 in Server Core mode
  • Windows Server 2012 R2 will not be supported by the TSM 7.1 server; planning to test it and claim support next year
  • The TSM 7.1 OC will install and run on Windows Server 2012 R2; IE11 is not supported yet
  • The TSM 7.1 client will support Windows 2012 R2 & Windows 8.1 for existing Windows functionality; no support for new Windows functionality until a later version of the client
  • Support for Centera on Linux x86_64

  37. AUDIT LIBVOLUME Command
  • AUDIT LIBVOLUME determines whether or not a tape volume is intact
  • >>---- AUDIT LIBVolume ---- library_name ---- volume_name ----><
  • Does not involve TSM copy pools or the database
  • If a volume is damaged, users can create a new copy of the volume, if a copy exists
  • Performed at the tape-drive level: the drive performs HCRC checking on the volume; the HCRC is generated by the tape drive based on data block length
  • The check takes a long time to finish (about 5.5 hours for LTO5 and 8.5 hours for LTO6)
  • The audit cannot be canceled
  • New command introduced in the TSM 6.3.4.20 release
  • Windows and AIX platforms only
  • Supports SCSI and VTL libraries
  • Supported IBM drives: LTO5 and LTO5+ drives & IBM TS1140 (JAG4)
  • LTO5: BBNE (FF) / BBNE (HH) or later; JAG4: 3829 or later
  • Tape driver levels: Atape 12.6.20; IBM tape driver for Windows 6.2.2.0
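A hypothetical invocation (library and volume names are illustrative):

```
audit libvolume LTO5LIB VOL001L5
```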

  38. SAN Mapping – New Tool & Support for HBA API Version 2
  • TSM SAN discovery now works on a dual-port FC adapter that presents as a single two-port HBA to a host system
  • HBA API version 2 gets info on the second port even if the first port has devices connected
  • New stand-alone tool gets device information directly from the SAN: dsmsanlist
  • Packaged with the TSM server & storage agent, but runs stand-alone (no options/parameters)
  • Output is displayed on screen and saved as dsmsanlist.out
  • For an HBA: name, manufacturer, serial number, node WWN, model, description, firmware & driver level, driver name
  • For a device: port ID, vendor ID, product ID, type, serial number, port WWN, device name
  • dsmsanlist interacts directly with the HBA API to get device information
  • Available for AIX, Linux (x86_64 and zSeries), and Solaris SPARC platforms
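Running the tool is simply (per the slide, it takes no options or parameters, and the output is also written to dsmsanlist.out):

```
./dsmsanlist
```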

  39. TSM Client – Faster Incremental Backup of IBM N series / NetApp Filers
  NetApp SnapDiff – enhanced resiliency, file-level recoverability, and avoidance of the time-consuming scan of the filer to determine backup candidates
  • The TSM BA client uses the NetApp API to identify new, changed, and deleted files
  • Eliminates the traditional TSM incremental backup file scan for NFS/CIFS-attached filers
  • Provides progressive incremental backup with file-level recovery
  TSM Server:
  • TSM 7.1 snapshot differential backup enhanced to be more robust and reliable
  • Minimizes the need to run full progressive incremental backups
  • Addresses an issue where failed files were not backed up or were backed up redundantly
  • Implements a persistent change log, synchronized with what is actually backed up on the TSM server (same integration points as JBB)
  • Entries are deleted as they are committed on the server
  • Files that failed during the backup remain in the change log and will be processed by future differential backups (previously, failed files would only be backed up by a full progressive incremental backup)
  • Automatic failover to full progressive incremental backup if the change log becomes corrupted or out of sync
  • One change log per filer volume, stored in the client staging directory, with a specific naming convention

  40. SnapDiff – HTTPS Protocol Support
  • snapdiffhttps = snapshot differential backups of NetApp filers over a secure HTTPS connection
  • Establishes a secure administrative session with the NetApp filer regardless of whether HTTP administrative access is enabled on the filer
  • HTTP is the default for the backup-archive client when establishing an administrative session with a NetApp filer; use the snapdiffhttps option if you need a secure HTTPS connection
  • The HTTPS connection only secures data transmitted over the administrative session between the backup-archive client and the NetApp filer
  • Secured data includes filer credentials, snapshot information, and file names and attributes generated by the snapshot differencing process
  • Very desirable, since the NetApp API otherwise requires sending the filer password in clear text
  • Limitations:
  • The HTTPS connection is not used to transmit normal file data accessed on the filer by the client through file sharing
  • The HTTPS connection does not apply to normal file data transmitted by the client to the TSM server through the normal TSM client/server protocol
  • The HTTPS protocol is not supported on the NetApp vFiler, so the snapdiffhttps option does not apply to vFilers
  • The snapdiffhttps option is available only from the command-line interface: not available in the backup-archive client GUI, and cannot be entered in the client options file
  • Example: dsmc incr z: -snapdiff -snapdiffhttps
  • Customers and highly secure government environments require a secure connection, since the administrative filer authentication session otherwise transmits the filer password in clear text

  41. TSM 7.1 Added Client Support • Linux Backup-Archive support for Btrfs (B-tree file system) • A GPL-licensed copy-on-write file system for Linux • Btrfs addresses the lack of pooling, snapshots, checksums, and integral multi-device spanning in Linux file systems • No TSM support added specifically for Btrfs snapshots • TSM 6.4 and below • Backed up as UNKNOWN • Data and standard ownership/permissions • TSM 7.1 • Fully recognized • Data, ownership/permissions, and extended attributes • Image backup • Mac SSL Support • TSM 6.2 added SSL in-flight data encryption • TSM 7.1 extends it to the Mac client

  42. Filespace Collocation • Additional levels of collocation available for VM backups • Allows collocation by group of filespaces • Reduces the amount of time needed to restore from tape • In order of increasing granularity, the collocation options are: (1) no collocation, (2) collocation by group of nodes, (3) collocation of a single node, (4) collocation by group of filespaces (multiple VMs per tape), (5) collocation by a single filespace (one VM per tape) • Collocation enables the server to keep files on a minimal number of sequential-access storage volumes • Reduces the number of volume mounts required to restore, retrieve, or recall a large number of files from the storage pool • Reduces the amount of time required for these operations • Trade-off between retrieve performance (reducing volume mounts) and cost (minimizing the number of volumes)
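On the server side, collocation is a storage-pool setting, and filespace-level grouping is configured through collocation groups. The sketch below shows roughly what the administrative commands might look like; COLLOCATE on UPDATE STGPOOL and DEFINE COLLOCGROUP are established commands, but the exact DEFINE COLLOCMEMBER syntax for filespace members, the pool name, node name, and filespace names here are illustrative assumptions — check the 7.1 administrator's reference before use.

```
update stgpool TAPEPOOL collocate=group
define collocgroup VMGROUP description="VMs to restore together"
define collocmember VMGROUP DATAMOVER1 filespace=\VMFULL-vm01
define collocmember VMGROUP DATAMOVER1 filespace=\VMFULL-vm02
```

Grouping the filespaces of VMs that are typically restored together keeps them on the same small set of tapes, which is the retrieve-performance side of the trade-off noted above.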

  43. TSM 7.1 Client – Automated System Recovery for UEFI or EFI • Expedites restore of the operating system • Centralizes protection of operating systems Features • Recovery from a catastrophic system or hardware failure • Used when all other repair options have been exhausted • Recovers the boot and system drives plus system state • Restores the system to a new disk drive using: • OS CD • TSM CLI CD • TSM CLI configuration diskette from TSM or backup sets • ASR, System Volume, and System State backed up to the TSM server • TSM 7.1 base feature on Windows 7 and 2008 • Backs up directly to the TSM server via schedule or command line • Works for both UEFI and BIOS boot architectures • The Unified EFI (UEFI) Specification (previously known as the EFI Specification) defines an interface between an operating system and platform firmware
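The "backs up directly to the TSM server via schedule or command line" step can be sketched as below. `backup systemstate` is a real Windows backup-archive client command; whether it alone captures everything ASR needs in a given 7.1 configuration is an assumption here, so verify against the client manual for your level.

```
rem Run from an elevated prompt on the Windows 7 / 2008 client:
dsmc backup systemstate
```

The same action can be driven centrally by a server-defined client schedule instead of the command line.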

  44. HSM for Windows – Backup Performance Enhancements • If a stub file is selected for incremental backup and no full backup copy already exists, the full file is retrieved from the TSM server used for migration and then backed up • Prior to release 7.1.0: the retrieval is processed immediately • In 7.1.0: the B/A client calls the HSM for Windows B/A plugin to get restore-order and connection information for the migrated file from the TSM server • Migrated files stored on virtual or physical tape or sequential-access disk are collected in a list for later processing • Collects all retrieval requests of a backup/archive job for one volume/filespace • Sorts the list according to connection information and restore-order information • The listed files are then retrieved in the optimal order • Retrieval requests for migrated files that do not reside on virtual or physical tape or sequential-access disk are not collected in the list; they are retrieved immediately ReFS Formatted Volume Support • HSM for Windows 7.1 supports ReFS • ReFS-formatted volumes can be managed by HSM for Windows in the same way as NTFS-formatted volumes
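The collect-sort-retrieve optimization above can be sketched in a few lines. This is an illustrative model, not product code: the media-type names, dictionary keys, and function are invented, but the logic follows the slide — requests on sequential media are batched and sorted by connection and restore order, everything else is retrieved immediately.

```python
# Hypothetical sketch of the batched-recall ordering described above.

SEQUENTIAL = {"tape", "vtl", "seqdisk"}  # assumed media-type labels


def plan_retrievals(requests):
    """Split requests into immediate retrievals and an ordered batch.

    Each request is a dict with 'name', 'media', 'connection', and
    'restore_order' keys (keys are illustrative).
    """
    immediate = [r["name"] for r in requests if r["media"] not in SEQUENTIAL]
    # Batch sequential-media requests, sorted by connection info first,
    # then by the restore order reported by the TSM server.
    batched = sorted(
        (r for r in requests if r["media"] in SEQUENTIAL),
        key=lambda r: (r["connection"], r["restore_order"]),
    )
    return immediate, [r["name"] for r in batched]


reqs = [
    {"name": "a.doc", "media": "tape", "connection": "srv1", "restore_order": 7},
    {"name": "b.doc", "media": "disk", "connection": "srv1", "restore_order": 1},
    {"name": "c.doc", "media": "tape", "connection": "srv1", "restore_order": 2},
]
immediate, batched = plan_retrievals(reqs)
print(immediate, batched)
```

Sorting by restore order means the tape is read roughly front to back instead of seeking back and forth, which is where the performance gain comes from.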

  45. HSM for Windows – Administrative Stub File Recall • New commands for the command-line client: recall, recalllist • recall: the user can traverse the file system and recall stub files • recalllist: the user can define a list of file names and recall those stub files • Both commands recall the file content but do not modify file attributes or file security • Both commands recall stub files from the local TSM server and remote TSM servers • Neither command is available in the GUI • Scenario • The user knows that some migrated files will be accessed soon and wants to move the file content from the TSM server to the file-system hierarchy level for faster access • If the file content was not modified, re-migration is fast because no file content needs to be sent to the TSM server
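The new commands might be invoked as follows. The command names `recall` and `recalllist` come from the slide and the `dsmclc` executable appears in the ACL samples later in this deck, but the argument forms and the `-s` list flag shown here are guesses patterned on those samples — consult the HSM for Windows command reference for the actual syntax.

```
dsmclc recall e:\projects\bigmodel.dat
dsmclc recalllist -s e:\temp\files-to-recall.txt
```

This pre-stages the content of files you know will be accessed soon, so the first access does not pay the recall latency.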

  46. HSM for Windows – ACL-Independent Migration/Recall • Previously: Windows file security (ACL – Access Control List) was always migrated and retrieved • Advantage: ACLs are stored by the TSM server and can be restored on demand • Drawback: migrated ACLs tend to diverge from the ACLs on the file system • Drawback: ACL updates on the TSM server are expensive (full file content retrieval and re-migration to the TSM server) • HSM 7.1: the user can select whether to migrate and/or retrieve Windows file security (ACLs) • The user can choose ACL migration and ACL retrieval independently • If the user decides not to migrate ACLs: no re-migration on ACL changes • If the user decides not to retrieve ACLs: ACLs on the file system are not overwritten • ACLs can be managed solely on the file system (as for HSM file recalls) • ACL migration • The user can override the default behavior by changing a selection box (GUI) or by passing a command-line option (CMD) • Sample: “dsmclc migrate myjob.osj” – apply the ACL default from the Configuration Wizard • Sample: “dsmclc migratelist -g myspace -x replace -s myjoblist.txt” – migrate with ACLs • ACL retrieval • The user can override the default behavior by changing the selection box (GUI) or by passing a command-line option (CMD) • If security was not stored on the TSM server but is specified for retrieval, the ACL cannot be restored; no warning or error message is issued in this case • Whether the ACL was restored is shown for each file in the GUI and CMD result lists • Sample: “dsmclc retrieve -g myspace -s no e:\myusers\*.doc” – retrieve .doc files from e:\myusers, do not restore ACLs

  47. HSM for Windows – ACL Migration/Recall Screen Shots • ACL Migration Jobs • ACL Retrieval Report Information

  48. TSM for Space Management – Unix • Seamless failover/failback • HSM can also handle an automated failover from one TSM server to another TSM server (in read-only mode) • Improved handling of streaming recall arguments • Improved removal function for multi-server managed file systems • Provides a command to verify the state of an HSM node • Functional enhancement for manual recall • Recall files to state resident (LTFS customers only)

  49. FastBack for Workstations Overview • Central Administration Console • Provides administrators with information on client status and activity to aid client health monitoring and performance tuning • Also provides tools to easily configure clients and simplify client deployment

  50. FastBack for Workstations – New Client Features in 7.1.0 • Support for Windows Server 2008 R2 and 2012 • CDP for Files Starter Edition and OEM Edition have been rebranded as FastBack for Workstations Starter Edition and OEM Edition • CDP for Files will no longer be supported beginning with version 7.1 • Desktop shortcut icon added • During installation, users are asked whether they would like to create a desktop shortcut to FastBack for Workstations • Option for initial backup when changes are made to the Files to Protect panel • When the user changes the Files to Protect panel, a popup asks whether to start an initial backup with the new settings, ensuring that all files matching the new settings are backed up • TSM API updated to 7.1 • Lotus Symphony file extensions added to the default configuration panel
