ADSM/TSM QuickFacts
in alphabetical order, supplemented thereafter by topic discussions
as compiled by Richard Sims, Boston University (www.bu.edu), Information Services & Technology
On the web at http://people.bu.edu/rbs/ADSM.QuickFacts
Last update: 2013/10/04

This reference was originally created for my own use as a systems programmer's "survival tool", to accumulate essential information and references that I knew I would have to refer to again, and to quickly re-find it. In participating in the ADSM-L mailing list, it became apparent that others had a similar need, and so it made sense to share the information.

The information herein derives from many sources, including submissions from other TSM customers. Thus, the information is that which everyone involved with TSM has contributed to a common knowledge base, and this reference serves as an accumulation of that knowledge, largely reflective of the reality of working with the TSM product as an administrator. I serve as a compiler and contributor.

This informal, "real-world" reference is intended to augment the formal, authoritative documentation provided by Tivoli and allied vendors, as frequently referenced herein. See the REFERENCES area at the bottom of this document for pointers to salient publications.

Command syntax is included for the convenience of a roaming techie carrying a printed copy of this document, and thus is not to be considered definitive or inclusive of all levels for all platforms: refer to manuals for the syntax specific to your environment. Upper case characters shown in command syntax indicate that at least those characters are required, not that they have to be entered in upper case.

I realize that I need to better "webify" this reference, and intend to do so in the future. (TSM administration is just a tiny portion of my work, and many other things demand my time.)
In dealing with the product, one essential principle must be kept in mind, which governs the way the product operates and restricts the server administrator's control of that data: the data which the client sends to a server storage pool will always belong to the client - not the server. There is no provision on the server for inspecting or manipulating file system objects sent by the client. Filespaces are the property of the client, and if the client decides not to do another backup, that is the client's business: the server shall take no action on the Active, non-expiring files therein. It is incumbent upon the server administrator, therefore, to maintain a relationship with client administrators for information to be passed when a filespace is obsolete and discardable, when it has fallen into disuse.

References to the ADSM/TSM database herein are based upon the "classic" database, as used in versions 1 through 5 of the product. Version 6 introduces a distinctly different DB2 database.

?
    "Match-one" wildcard character used in Include/Exclude patterns to match any single character except the directory separator; it does not match to end of string. Cannot be used in directory or volume names.

?*
    Wildcard "trick" to force the client to do a Classic Restore, when it would do a No Query Restore if just wildcard '*' were used. Example: /home/user/?* will cause a Classic Restore.
    Manuals: Admin Guide, "Optimizing Restore Operations for Clients"
    IBM Technotes: 1142185; 1209563

* (asterisk)
    "Match-all" wildcard character used in Include/Exclude patterns to match zero or more characters, but it does not cross a directory boundary. Cannot be used in directory or volume names.

* (asterisk)
    In client option files, serves to begin a comment. That is, all text from the asterisk to the end of the line is taken to be commentary, to be ignored by the file parser.
    Most commonly, the comment starts at the beginning of the line, but it may appear anywhere in the line, as when you want to annotate an options spec which appears earlier on that line.
    Search tip: The manuals may not have the word "asterisk" where usage of the * character is explained. In many cases, the doc may simply include the asterisk in parentheses, such as the phrase "... wildcard character (*) ...".

* (asterisk)
    SQL SELECT: to specify that all columns in a table are being referenced, which is to say the entirety of a row. As in:
    SELECT COUNT(*) AS -
    "Number of nodes" FROM NODES

*.*
    Wildcard specification often seen in Windows include-exclude specifications - a formed-habit holdover from DOS "8.3" filename formats, which is largely an obsolete concept these days and should not be used... Explicitly, *.* means any file name with the '.' character anywhere in the name, whereas * means any file name.

*SM
    Wildcard product name first used on ADSM-L by Peter Jodda to generically refer to the ADSM->TSM product - which has become adroit, given the increasing frequency with which IBM is changing the name of the product.
    See also: ESM; ITSM

& (ampersand)
    Special character in the MOVe DRMedia, MOVe MEDia, and Query DRMedia commands, CMd operand, as the lead character for special variable names.

% (percent sign)
    In SQL: With the LIKE operator, % functions as a wildcard character which means zero or more characters. For example, pattern A% matches any character string starting with a capital A.
    See also: _

%1, %2, %3, etc.
    These are symbolic variables within a MACRO (q.v.).

_ (underscore)
    In SQL, in a Select LIKE, serves as a pattern-matching operator to match any single character.
    TSM's Select does not adhere to the convention of a backslash character serving as an escape char to turn off the wildcardness of the underscore, as in 'Oraclefs\_data%'; but it does recognize the ESCAPE operator to allow defining the backslash to be the escape char, as in 'Oraclefs\_data%' ESCAPE '\'.
    See also: %

[
    "Open character class" bracket character used in Include/Exclude patterns to begin the enumeration of a character class. That is, to wildcard on any of the individual characters specified. End the enumeration with ']'; which is to say, enclose all the characters within brackets. You can code like [abc] to represent the characters a, b, and c; or like [a-c] to accomplish the same thing. Within the character class specification, you can code special characters with a backslash, as in [abc\]de] to include the ']' char.

<
    "Less than" symbol, in TSM Select statement processing. In Windows batch file processing, be aware that this is a file redirection character to it, so you have to quote any TSM expressions so that TSM gets them, rather than Windows Batch.

<=
    "Less than or equal to" symbol, in TSM Select statement processing. In Windows batch file processing, be aware that the '<' is a file redirection character to it, so you have to quote any TSM expressions so that TSM gets them, rather than Windows Batch.

>
    Redirection character in the server administrative command line interface, if at least one space on each side of it, saying to replace the specified output file. There is no "escape" character to render this character "un-special", as a backslash does in Unix. Thus, you should avoid coding " > " in an SQL statement: eliminate at least one space on either side of it. Note that redirection cannot be used in Server Scripts. In Windows batch file processing, be aware that this is a file redirection character to it, too, so you have to quote any TSM expressions so that TSM gets them, rather than Windows Batch.
    Ref: Admin Ref "Redirecting Command Output"
    See also: "(CLIB_OPT)/>" for NetWare redirection

>>
    Redirection characters in the server administrative command line interface, if at least one space on each side, saying to append to the specified output file.
    Ref: Admin Ref "Redirecting Command Output"

>=
    "Greater than or equal to" symbol, in TSM Select statement processing. In Windows batch file processing, be aware that the '>' is a file redirection character to it, so you have to quote any TSM expressions so that TSM gets them, rather than Windows Batch.

{}
    Use braces in a file path specification within a query or restore/retrieve to isolate and explicitly identify the file space name (or virtual mount point name) to *SM, in cases where there can be ambiguity. By default, *SM uses the file space with the longest name which matches the beginning of that file path spec, and that may not be what you want. For example: If you have two filespaces "/a" and "/a/b" and want to query "/a/b/somefile" from the /a file system, specify "{/a/}somefile".
    See: File space, explicit specification

||
    SQL: Logical OR operator. Also effects concatenation, in some implementations, as in:
    SELECT filespace_name || hl_name || ll_name AS "_______File Name________"
    Note that not all SQL implementations support || for concatenation: you may have to use CONCAT() instead.

-
    "Character class range" character used in Include/Exclude patterns to specify a range of enumerated characters, as in "[a-z]".

]
    "Close character class" character used in Include/Exclude patterns to end the enumeration of a character class.

\
    "Literal escape" character used in Include/Exclude patterns to cause an enumerated character class character to be treated literally, as when you want to include a closing square bracket as part of the enumerated string ([abc\]xyz]).

...
    "Match N directories" characters used in Include/Exclude patterns to match zero or more directories.
    Example: "exclude /cache/.../*" excludes all directories (and files) under directory "/cache/".

...
    As a filespace name being displayed at the server, indicates that the client stored the filespace name in Unicode, and the server lacks the "code page" which allows displaying the name in its Unicode form.

/ (slash)
    At the end of a filespec, in Unix means "directory". A 'dsmc i' on a filespec ending in a slash says to back up only directories with matching names. To back up files under the directories, you need to have an asterisk after the slash (/*). If you specify what you know to be a directory name, without a slash, *SM will doggedly believe it to be the name of a file - which is why you need to maintain the discipline of always coding directory names with a slash at the end.

/...
    In ordinary include-exclude statements, is a wildcard meaning zero or more directories.

/...
    DFSInclexcl: is interpreted as the global root of DFS.

/....
    DFSInclexcl: Match zero or more directories (in that "/..." is interpreted as the global root of DFS).

/* */
    Used in Macros to enclose comments. The comments cannot be nested and cannot span lines. Every line of a comment must contain the comment delimiters.

= (SQL)
    Is equal to. The SQL standard specifies that the equality test is case sensitive when comparing strings.

!= (not equal)
    For SQL, you instead need to code "<>".

<>
    SQL: Means "not equal".

$$ACTIVE$$
    The name given to the provisional active policy set where definitions have been made (manually or via Import), but you have not yet performed the required VALidate POlicyset and ACTivate POlicyset to commit the provisional definitions, whereafter there will be a policy set named ACTIVE.
    Ref: Admin Guide
    See also: Import

-1073741819
    Sometimes encountered in (TDP) error messages, this is deficient reporting of condition value C0000005, reflecting a memory access problem in Windows (EXCEPTION_ACCESS_VIOLATION).
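Such decimal values are simply the Windows NTSTATUS code printed as a signed 32-bit integer. A minimal Python sketch of the conversion (plain two's-complement arithmetic; the function name is made up for illustration):

```python
# Interpret a signed 32-bit status code, as (TDP) error messages often
# print it, as the unsigned hex NTSTATUS value it actually represents.
def signed32_to_hex(n: int) -> str:
    return format(n & 0xFFFFFFFF, '08X')

print(signed32_to_hex(-1073741819))  # C0000005 (EXCEPTION_ACCESS_VIOLATION)
```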
-50
    TSM general return code often involved with TCP/IP communication failures, possibly due to networking problems.

0xdeadbeef
    Some subsystems pre-populate allocated memory with the hexadecimal string 0xdeadbeef (this 32-bit hex value is a data processing affectation) so as to be able to detect that an application has failed to initialize an acquired storage area with binary zeroes. Landing on a halfword boundary can obviously lead to getting variant "0xbeefdead".

10.0.0.0 - 10.255.255.255
    Private subnet address range, as defined in RFC 1918, commonly used via Network Address Translation behind some firewall routers/switches. You cannot address such a subnet from the Internet: private subnet addresses can readily initiate communication with each other and servers on the Internet, but Internet users cannot initiate contacts with them.
    See also: 172.16.0.0 - 172.31.255.255; 192.168.0.0 - 192.168.255.255

1500
    Server port default number for serving clients. Specify via TCPPort server option and DEFine SERver LLAddress.

1501
    The standard client port for backups (schedule). The scheduler process listens on this port for SCHEDMODe PRompted: it does not open this port number for SCHEDMODe POlling. A different number may be used via the TCPCLIENTPort client option. Note that this port exists only when the scheduled session is due: the client does not keep a port open when it is waiting for the schedule to come around. CAD does not listen on port 1501 (it uses some random port number, like 32971).
    See also: 1581; SCHEDMODe; TCPCLIENTPort

1510
    Client port for Shared Memory.

1543
    ADSM HTTPS port number.

1580
    Client admin port. HTTPPort default.
    See also: Web Admin

1581
    Default HTTPPort number for the Web Client and dsmcad TCP/IP port. (The number cutely is the sum of the standard 1501 client port number and "80" as the standard web server port number.)
    See also: 1501; HTTPPort; WEBports

172.16.0.0 - 172.31.255.255
    Private subnet address range, as defined in RFC 1918, commonly used via Network Address Translation behind some firewall routers/switches. You cannot address such a subnet from the Internet: private subnet addresses can readily initiate communication with each other and servers on the Internet, but Internet users cannot initiate contacts with them.
    See also: 10.0.0.0 - 10.255.255.255; 192.168.0.0 - 192.168.255.255

1900
    The IBM mainframe epoch year. The System/370 architecture established the epoch for the TimeOfDay clock as January 1, 1900, 0 a.m. Greenwich Mean Time (GMT). Whereas the ADSM/TSM product derives from a mainframe heritage, it tends to use this epoch for various purposes, as in a date/time of 1900-01-01 00:00:00.000000 being used to designate that a stored object or offsite volume is eligible for recycling.
    See also: DEACTIVATE_DATE

192.168.0.0 - 192.168.255.255
    Private subnet address range, as defined in RFC 1918, commonly used via Network Address Translation behind Asante and other brand firewall routers/switches. You cannot address such a subnet from the Internet: private subnet addresses can readily initiate communication with each other and servers on the Internet, but Internet users cannot initiate contacts with them.
    See also: 10.0.0.0 - 10.255.255.255; 172.16.0.0 - 172.31.255.255

2 GB limit
    Through AIX 4.1, Raw Logical Volume (RLV) partitions and files are limited to 2 GB in size. It takes AIX 4.2 to go beyond 2 GB.

2105
    Model number of the IBM Versatile Storage Server. Provides SNMP MIB software ibm2100.mib .
    www.ibm.com/software/vss

32-bit client limitations
    There is only so much memory that a 32-bit client can address, and that will limit a non-NQR restore: see msg ANS5016E.

32-bit executable in AIX?
    To discern whether an AIX command or object module is 32-bit, rather than 64-bit, use the 'file' command on it.
    (This command references "signature" indicators listed in /etc/magic.) If 32-bit, the command will report like:
    executable (RISC System/6000) or object module not stripped
    See also: 64-bit executable in AIX?

32-bit vs. 64-bit TSM for AIX
    See IBM site Technote 1154486 for a table of filesets. TSM 5.1 and 5.2 had two AIX clients: a 32-bit client and a 64-bit client. Version 5.3 has only a 32-bit client, which runs on both 32-bit and 64-bit versions of the AIX operating system. (It is not explained in the doc whether the client will run in 64-bit mode on a 64-bit AIX: if not, that could severely reduce client capabilities.) (There remain separate 32-bit and 64-bit versions of the API.)
    IBM Technotes: 1230947

3420
    IBM's legacy, open-reel, half-inch tape format, circa 1974. Records data linearly in 9 tracks (1 byte plus odd parity). Reels could hold as much as 2400 feet of tape.
    Capacity: 150 MB
    Pigment: Iron
    Models 4, 6, 8 handle up to 6250 bpi, with an inter-block gap of 0.3".
    Reel capacity: Varies according to block size - max is 169 MB for a 2400' reel at 6250 bpi.

3466
    See also: Network Storage Manager (NSM)

3466, number of *SM servers
    Originally, just one ADSM server per 3466 box. But as of 2000, multiple, as in allowing the 3466 to perform DR onto another TSM server. (See http://www.storage.ibm.com/nsm/nsmpubs/nspubs.htm)

3466 web admin port number
    1580. You can specify it as part of the URL, like http://______:1580 .

3480, 3490, 3490E, 3590, 3494...
    IBM's high tape devices (3480, 3490, 3490E, 3590, 3494, etc.) are defined in SMIT under DEVICES then TAPE DRIVES; not thru ADSM DEVICES. This is because they are shipped with the tape hardware, not with ADSM. Also, these devices use the "/dev/rmtX" format: all other ADSM tape drives are of the "/dev/mtX" format.

3480
    IBM's first generation of this 1/2" tape cartridge technology, announced March 22, 1984 and available January, 1985.
    Used a single-reel approach and servo tracking pre-recorded on the tape for precise positioning and block addressing. Excellent start-stop performance. The cartridge technology would endure and become the IBM cartridge standard, prevailing into the 3490 and 3590 models for at least 20 more years.
    Tracks: 18, recorded linearly and in parallel until EOT encountered (not serpentine like later technologies), whereupon the tape would be full.
    Recording density: 38,000 bytes/inch
    Read/write rate: 3 MB/sec
    Rewind time: 48 seconds
    Tape type: chromium dioxide (CrO2)
    Tape length: 550 feet
    Cartridge dimensions: 4.2" wide x 4.8" high x 1" thick
    Cartridge capacity: Varies according to block size - max is 208 MB.
    Transfer rate: 3 MB/s
    Next generation: 3490

3480 cleaning cartridge
    Employs a nylon filament ribbon instead of magnetic tape.

3480 tape cartridge
    AKA "Cartridge System Tape". Color: all gray. Identifier letter: '1'.
    See also: CST; HPCT; Media Type

3480 tape drive definition
    Defined in SMIT under DEVICES then TAPE DRIVES; not thru ADSM DEVICES. This is because as an IBM "high tape device" it is shipped with the tape hardware, not with ADSM. Also, these devices use the "/dev/rmtX" format: all other ADSM tape drives are of the format "/dev/mtX".

3490
    IBM's second generation of this 1/2" tape cartridge technology, circa 1989, using a single-reel approach and servo tracking pre-recorded on the tape for precise positioning. Excellent start-stop performance.
    Media type: CST
    Tracks: 18 (like its 3480 predecessor), recorded linearly and in parallel until EOT encountered (not serpentine like later technologies), whereupon the tape would be full.
    Transfer rate: 3 MB/sec sustained
    Capacity: 400 MB physical
    Tape type: chromium dioxide (CrO2)
    Tape length: 550 feet
    Note: Cannot read tapes produced on 3490E, due to 36-track format of that newer technology.
    Previous generation: 3480
    Next generation: 3490E

3490 cleaning cartridge
    Employs a nylon filament ribbon instead of magnetic tape.
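The "capacity varies according to block size" caveat in the reel and cartridge specs above comes from the fixed inter-block gap: every block costs its recorded length plus the gap, so small blocks squander tape. A back-of-envelope Python sketch, using the 3420 figures quoted above (2400' reel, 6250 bpi, 0.3" gap; leader/trailer overhead ignored, so the big-block result comes out slightly above the quoted 169 MB maximum):

```python
# Rough capacity of a linear tape: each block occupies
# (block_size / density) inches of data plus a fixed inter-block gap.
def reel_capacity_mb(tape_feet, density_bpi, gap_inches, block_size):
    tape_inches = tape_feet * 12
    block_inches = block_size / density_bpi + gap_inches
    blocks = int(tape_inches / block_inches)
    return blocks * block_size / 1_000_000

print(round(reel_capacity_mb(2400, 6250, 0.3, 32760), 1))  # large blocks: ~170 MB
print(round(reel_capacity_mb(2400, 6250, 0.3, 80), 1))     # tiny blocks waste most of the tape
```

The same reasoning explains why an MVS host writing tiny blocks could see far less than a cartridge's rated capacity.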
3490 EOV processing
    3490E volumes will do EOV processing just before the drive signals end of tape (based on a calculation from IBM drives), when the drive signals end of tape, or when maxcapacity is reached, if maxcapacity has been set. When the drive signals end of tape, EOV processing will occur even if maxcapacity has not been reached. Contrast with 3590 EOV processing.

3490 not getting 2.4 GB per tape?
    In MVS TSM, if you are seeing your 3490 cartridges getting only some 800 MB per tape, it is probably that your Devclass specification has COMPression=No rather than Yes. Also check that your MAXCAPacity value allows filling the tape, and that the 3490 drive itself isn't hard-configured to prevent the host from setting a high density.

3490 tape cartridge
    AKA "Enhanced Capacity Cartridge System Tape". Color: gray top, white base. Identifier letter: 'E'
    Capacity: 800 MB native; 2.4 GB compressed (IDRC 3:1 compression)

3490 tape drive definition
    Defined in SMIT under DEVICES then TAPE DRIVES; not thru ADSM DEVICES. This is because as an IBM "high tape device" it is shipped with the tape hardware, not with ADSM. Also, these devices use the "/dev/rmtX" format: all other ADSM tape drives are of the format "/dev/mtX".

3490E
    IBM's third generation of this 1/2" tape cartridge technology, using a single-reel approach and servo tracking pre-recorded on the tape for precise positioning. Excellent start-stop performance.
    Designation: CST-2
    Tracks: 36, implemented in two sets of 18 tracks: the first 18 tracks are recorded in the forward direction until EOT is encountered, whereupon the heads are electronically switched (no physical head or tape shifting) and the tape is then written backwards towards BOT. Can read 3480 and 3490 tapes.
    Capacity: 800 MB physical; 2.4 GB with 3:1 compression. IDRC recording mode is the default, and so tapes created on such a drive must be read on an IDRC-capable drive.
    Transfer rate: Between host and tape unit buffer: 9 MB/sec.
    Between buffer and drive head: 3 MB/sec.
    Tape type: chromium dioxide (CrO2)
    Tape length: 800 feet
    Previous generation: 3490
    Next generation: 3590

3490E cleaning cartridge
    Employs a nylon filament ribbon instead of magnetic tape.

3490E Model F
    36-track head to read/write 18 tracks bidirectionally.

3494
    IBM robotic library with cartridge tapes, originally introduced to hold 3490 tapes and drives, but later to hold 3590 tapes and drives (same cartridge dimensions). Current formal product name: IBM TotalStorage 3494 Tape Library. Model HA1 is high availability: instead of just one accessor (robotic mechanism) at one end, it has two, one at each end. The 3494 does not maintain statistics for its volumes: it does not track how many times a volume was mounted, how many times it suffered an I/O error, etc.
    See also: Convenience Input-Output Station; Dual Gripper; Fixed-home Cell; Floating-home Cell; High Capacity Output Facility; Library audit; Library; 3494, define; Library Manager; SCRATCHCATegory; Volume Categories; Volume States

3494, access via web
    This was introduced as part of the IBM StorWatch facility in a 3494 Library Manager component called 3494 Tape Library Specialist, available circa late 2000. It is a convenience facility that is read-only: one can do status inquiries, but no functional operations. If at the appropriate LM level, the System Summary window will show "3494 Specialist".

3494, add tape to
    'CHECKIn LIBVolume ...' Note that this involves a tape mount.

3494, audit tape (examine its barcode to assure physically in library)
    'mtlib -l /dev/lmcp0 -a -V VolName'
    Causes the robot to move to the tape and scan its barcode.
    'mtlib -l /dev/lmcp0 -a -L FileName' can be used to examine tapes en masse, by taking the first volser on each line of the file.

3494, CE slot
    See: 3494 reserved cells

3494, change Library Manager PC
    In rare circumstances it will be necessary to swap out the 3494's industrial PC and put in a new one.
    A major consideration here is that the tape inventory is kept in that PC, and the prospect of doing a Reinventory Complete System after such a swap is wholly unpalatable in that it will discard the inventory and rebuild it - with all the tape category code values being lost, being reset to Insert. So you want to avoid that. (A TSM AUDit LIBRary can fix the category codes, but...) And as Enterprise level hardware and software, such changes should be approached more intelligently by service personnel, anyway. Realize that the LM consists of the PC, the LM software, and a logically separate database - which should be as manageable as all databases can be. If you activate the Service menu on the 3494 control panel, under Utilities you will find "Dump database..." and "Restore database...", which the service personnel should fully exploit if at all possible to preserve the database across the hardware change. (The current LM software level may have to be brought up to the level of the intended, new PC for the database transfer to work well.)

3494, change to manual operation
    On rare occasions, the 3494 robot will fail and you need to continue processing, by switching to manual operation. This involves:
    - Go to the 3494 Operator Station and proceed per the Using Manual Mode instructions in the 3494 Operator Guide. Be sure to let the library Pause operation complete before entering Manual Mode.
    - TSM may have to be told that the library is in manual mode. You cannot achieve this via UPDate LIBRary: you have to define another instance of your library under a new name, with LIBType=MANUAL. Then do UPDate DEVclass to change your 3590 device class to use the library in manual mode for the duration of the robotic outage.
    - Either watch the Activity Log, doing periodic Query REQuest commands; or run 'dsmadmc -MOUNTmode'. REPLY to outstanding mount requests to inform TSM when a tape is mounted and ready.
    If everything is going right, you should see mount messages on the tape drive's display and in the Manual Mode console window, where the volser and slot location will be displayed. If a tape has already been mounted in Manual Mode, dismounted, and then called for again, there will be an "*" next to the slot number when it is displayed on the tape drive calling for the tape, to clue you in that it is a recent repeater.

3494, count of all volumes
    Via Unix command: 'mtlib -l /dev/lmcp0 -vqK'

3494, count of cartridges in Convenience I/O Station
    There seems to be no way to determine this. One might think of using the cmd 'mtlib -l /dev/lmcp0 -vqK -s ff10' to get the number, but the FF10 category code is in effect only as the volume is being processed on its way to the Convenience I/O. The 3494 Operator Station status summary will say: "Convenience I/O: Volumes present", but not how many. The only recourse seems to be to create a C program per the device driver manual and the mtlibio.h header file to inspect the library_data.in_out_status value, performing an And with value 0x20 and looking for the result to be 0 if the Convenience I/O is *not* all empty.

3494, count of CE volumes
    Via Unix command: 'mtlib -l /dev/lmcp0 -vqK -s fff6'

3494, count of cleaning cartridges
    Via Unix command: 'mtlib -l /dev/lmcp0 -vqK -s fffd'

3494, count of SCRATCH volumes (3590 tapes, default ADSM SCRATCH category code)
    Via Unix command: 'mtlib -l /dev/lmcp0 -vqK -s 12E'

3494, eject tape from
    See: 3494, remove tape from

3494, empty slots, number of
    At the OS prompt, perform: 'mtlib -l /dev/lmcp0 -qL' and see the "available cells" number.

3494, identify dbbackup tape
    See: dsmserv RESTORE DB, volser unknown

3494, inventory operations
    See: Inventory Update; Reinventory complete system

3494, list all tapes
    'mtlib -l /dev/lmcp0 -qI' (or use options -vqI for verbosity, for more descriptive output)

3494, manually control
    Use the 'mtlib' command, which comes with the 3494 Tape Library Device Driver.
    Do 'mtlib -\?' to get usage info.

3494, monitor
    See: mtevent

3494, mount tape
    In a 3494 tape library, via Unix command:
    'mtlib -l /dev/lmcp0 -m -f /dev/rmt? -V VolName'    # Absolute drive name
    'mtlib -l /dev/lmcp0 -m -x Rel_Drive# -V VolName'   # Relative drive number
    (but note that the relative drive method is unreliable). If you are going to use one of TSM's tape drives, perform an UPDate DRive to put it offline to TSM, first.

3494, not all drives being used
    See: Drives, not all in library being used

3494, number of drives in
    Via Unix command: 'mtlib -l /dev/lmcp0 -qS'

3494, number of frames (boxes)
    The mtlib command won't reveal this. The frames show in the "Component Availability" option in the 3494 Tape Library Specialist.

3494, partition/share
    TSM SAN tape library sharing support is only for libraries that use SCSI commands to control the library robotics and the tape management. This does *not* include the 3494, which uses network communication for control. Sharing of the 3494/3590s thus has to occur via conventional partitioning or dynamic drive sharing (which is via the Auto-Share feature introduced in 1999). There is no dynamic sharing of tape volumes: they have to be pre-assigned to their separate TSM servers via Category Codes.
    Ref: Redpaper "Tivoli Storage Manager: SAN Tape Library Sharing"; Redbook "Guide to Sharing and Partitioning IBM Tape Library Data" (SG24-4409)

3494, ping
    You can ping a 3494 from another system within the same subnet, regardless of whether that system is in the LM's list of LAN-authorized hosts. If you cannot ping the 3494 from a location outside the subnet, it may mean that the 3494's subnet is not routed - meaning that systems on that subnet cannot be reached from outside.

3494, remote operation
    See "Remote Library Manager Console Feature" in the 3494 manuals.

3494, remove tape from
    'CHECKOut LIBVolume LibName VolName [CHECKLabel=no] [REMove=No]'
    (The command has a FORCE=Yes capability, but that is said not to be for 349x libraries.)
    To physically cause an eject via AIX command, change the category code to EJECT (X'FF10'):
    'mtlib -l /dev/lmcp0 -vC -V VolName -t ff10'
    The more recent Library Manager software has a Manage Import/Export Volumes menu, wherein Manage Insert Volumes claims ejectability.

3494, RS-232 connect to SP
    Yes, you can connect a 3494 to an RS/6000 SP via RS-232, though it is uncommon, slow, and of limited distance compared to using ethernet.

3494, status
    'mtlib -l /dev/lmcp0 -qL'

3494, steps to set up in ADSM
    - Define the library
    - Define the drives in it
    - Restart the server. (Startup message "ANR8451I 349x library LibName is ready for operations".)
    (In the ancient past it was also necessary to add "ENABLE3590LIBRARY YES" to dsmserv.opt.)

3494, "Unknown media type" problem
    An older 3494 will reject a reinserted tape - one which the library has used before. The accessor takes the tape from the Convenience I/O portal, carries it to a storage cell, stores it, reads the barcode - then takes the tape from the cell and returns it to the portal, whereupon an Intervention Required state is posted, with the "Unknown media type" problem. This may be due to a barcode label which is defaced or hard to read; but more usually it is because the plastic cell structure has distorted over time, making barcode reading increasingly difficult. Re-teaching the frame may resolve the problem; or the CE may have to do some shimming, or replace the cell array. If this happens on an individual tape, you can often correct it by shifting the little media barcode (the "K" on a 3590K tape) upward by about a millimeter. Note that 3592 tape technology eliminated the separate little media label, in favor of a single volser + media type barcode label, so there is no comparable problem with them.

3494 Cell 1
    Special cell in a 3494: it is specially examined by the robot after the doors are closed. You would put here any tape manually removed from a drive, for the robot to put away.
    The robot will read the volume serial, then examine the cell which was that tape cartridge's last home: finding it empty, the robot will store the tape there. The physical location of that cell: first frame, inner wall, upper leftmost cell (which the library keeps empty).

3494 cells, total and available
    'mtlib -l /dev/lmcp0 -qL' lines: "number of cells", "available cells".

3494 cleaner cycles remaining
    'mtlib -l /dev/lmcp0 -qL' line: "avail 3590 cleaner cycles"

3494 cleaning cartridge
    The 3494 keeps track of the number of times each cleaning cartridge has been used and automatically ejects exhausted cartridges.
    See: Cleaner Cartridge, 3494

3494 connectivity
    A 3494 can be simultaneously connected via LAN and RS-232.

3494 device driver
    See: atldd

3494 diagnosis
    See: trcatl

3494 ESCON device control
    Some implementations may involve ESCON connection to 3490 drives plus SCSI connection to 3590 drives. The ESCON 3490 ATL driver is called mtdd and the SCSI 3590 ATL driver was called atldd, and they have shared modules between them. One thus may be hesitant to install atldd due to this "sharing". In the pure ESCON drive case, the commands go down the ESCON channel, which is also the data path. If you install atldd, the commands now first go to the Library Manager, which then reissues them to those drives. Thus, it is quite safe to install atldd for ESCON devices.

3494 foibles
    The 3494 is prone to a problem evidenced by the accessor leaving cartridges partly sticking out of cells, which it subsequently collides with, such that the library manager performs an emergency accessor power-down, resulting in an Int Req condition and all the lights on the black-side panel being out. The cause is the cartridge being inserted into the cell too high, such that the plastic prongs in the side of the cell don't get a chance to engage the cartridge, so it comes partway out on the accessor's "palm" as the gripper withdraws. The accessor is out of alignment relative to the cells.
Correction is usually achieved by re-teaching locations in the frame, but may require adjusting the height of the involved cell assembly. ---------- The 3494 can also enter Intervention Required state with an Accessor failure due to the grease on the elevating helix having dried out and solidified, where the motors are then overtaxed and generate an overload condition, sensed by the library manager. 3494 Gripper factoids 3494 documentation numbers the grippers 1 and 2. The mtlibio.h programming header file numbers them 0 and 1. The inconsistency can generate confusion when reporting problems. If the first gripper is defunct, the 3494 can carry on, but there are consequences... When outputting tapes to the 10-slot Convenience I/O Station, the top two positions can no longer be accessed, and the portal will be reported as full when there are only 8 tapes in it. 3494 Gripper Error Recovery Cell Cell location 1 A 3 if Dual Gripper installed; 1 A 1 if Dual Gripper *not* installed. Also known as the "Error Recovery Cell". Ref: 3494 Operator Guide. 3494 Gripper Failure - artificial! You can get a reported failure of Gripper 1 in the 3494, when in fact it's okay. The robotics try to get a cartridge out of a cell with gripper 1 and, when that proves too arduous, try again with gripper 2; that happens to work, so gripper 1 is tried again, and still no go: so the library concludes that the gripper is a problem. But it's not. The actual problem is droopy, misaligned plastic cell assemblies, which need to be replaced. 3494 inaccessible (usually after just installed) Check for the following: - That the 3494 is in an Online state. - In the server, that the atldd software (LMCPD) has been installed and that the lmcpd process is running. - That your /etc/ibmatl.conf is correct: if a TCP/IP connection, specify the IP addr; if RS-232, specify the /dev/tty port to which the cable is attached.
- If a TCP/IP connection, that you can ping the 3494 by both its network name and IP address (to assure that DNS was correctly set up in your shop). - If a LAN connection: - Check that the 3494 is not on a Not Routed subnet: such a router configuration prevents systems outside the subnet from reaching systems residing on that subnet. - A port number must be in your host /etc/services for it to communicate with the 3494. By default, the Library Driver software installation creates a port '3494/tcp' entry, which should match the default port at the 3494 itself, per the 3494 installation OS/2 TCP/IP configuration work. - Your host needs to be authorized to the 3494 Library Manager, under "LAN options", "Add LAN host". (An RS-232 direct physical connection is its own authorization.) Make sure you specify the full host network name, including domain (e.g., a.b.com). If communications had been working but stopped when your OS was updated, assure that it still has the same host name! - If an RS-232 connection: - Check the Availability of your Direct Attach Ports (RS-232): the System Summary should show them by number, if Initialized, in the "CU ports (RTIC)" report line. If not, go into Service Mode, under Availability, to render them Available. - Connecting the 3494 to a host is a DTE<->DTE connection, meaning that you must employ a "null modem" cable or connector adapter. - Certainly, make sure the RS-232 cable is run and attached to the port inside the 3494 that you think it is. - Try performing 'mtlib' queries to verify, outside of *SM, that the library can be reached. (In the ancient past it was also necessary to add "ENABLE3590LIBRARY YES" to dsmserv.opt.) 3494 Intervention Required detail The only way to determine the nature of the Int Req on the 3494 is to go to its Operator Station and see, under menu Commands->Operator intervention. There is no programming interface available to allow you to get this information remotely.
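The 'mtlib' verification step above lends itself to a small monitoring script. A minimal sketch, assuming sh on the host; the "operational state" field name and dot-padded layout are assumptions about the -qL report format, so verify against what your atldd level actually prints:

```shell
#!/bin/sh
# Reachability/health sketch for a 3494, per the 'mtlib' verification above.
# ASSUMPTION: the -qL report contains an "operational state" line with the
# label padded by dots; adjust the pattern to your driver level's output.
lib_state() {
  # $1 = captured 'mtlib -l /dev/lmcp0 -qL' output; prints the state text
  echo "$1" | sed -n 's/^ *operational state[. ]*//p'
}

# Live use would be:  out=`mtlib -l /dev/lmcp0 -qL` || echo "3494 unreachable"
# Assumed sample report, for illustration only:
out="Performing Query Library Data using /dev/lmcp0
   operational state..... Automated Operational State
   number of cells....... 1458
   available cells....... 322"
lib_state "$out"
```

Such a wrapper can be cron-driven to catch lmcpd or network trouble before TSM mounts start queueing.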
Odd note: A vision failure does not result in an Int Req! 3494 IP address, determine Go to the 3494 control panel. From the Commands menu, select "LAN options", and then "LM LAN information". 3494 Manual Mode If the 3494's Accessor is nonfunctional you can operate the library in Manual Mode. Using volumes in Manual Mode affects their status: The 3494 redbook (SG24-4632) says that when volumes are used in Manual Mode, their LMDB indicator is set to "Manual Mode", as used to direct error recovery when the lib is returned to Auto mode. This is obviously necessary because the location of all volumes in the library is jeopardized by the LM's loss of control of the library. The 3494 Operator Guide manual instructs you to have Inventory Update active upon return to Auto mode, to re-establish the current location of all volumes. 3494 microcode level See: "Library Manager, microcode level" 3494 tape drives microcode loading Can be performed via the Library Manager (by inserting the microcode CD-ROM into the OS/2 industrial computer and then going through menus) - but this is a very slow method, taking about 90 minutes, because the data transfer has to occur over the RS-422 ARTIC connection. Much faster is to get the microcode onto the host, then transfer over the drive's SCSI or FC host connection (as through tapeutil). 3494 port number See: Port number, for 3494 communication 3494 problem: robot is dropping cartridges This has been seen where the innards of the 3494 have gone out of alignment, for any of a number of reasons. Re-teaching can often solve the problem, as the robot re-learns positions and thus realigns itself. 3494 problem: robot misses some fiducials - but not all During its repositioning operations, the robot attempts to align itself with the edges of each fiducial, but after dwelling on one it keeps on searching, as though it didn't see it.
This operation involves the LED, which is carried on the accessor along with the laser (which is only for barcode reading). The problem is that the light signal involved in the sensing is too weak, which may be due to dirt, an aged LED, or a failing sensor. The signal is marginal, so some fiducials are seen, but not others. 3494 problems See also "3494 OPERATOR STATION MESSAGES" section at the bottom of this document. 3494 reserved cells A 3494 minimally has two reserved cells: 1 A 1 Gripper error recovery (1 A 3 if Dual Gripper installed). 1 A 20 CE cartridge (3590). 1 A 19 is also reserved for 3490E, if such cartridges participate. _ K 6 Not a cell, but a designation for a tape drive on wall _. 3494 scratch category, default See: DEFine LIBRary 3494 sharing Can be done with TSM 3.7+, via the "3494SHARED YES" server option; but you still need to "logically" partition the 3494 via separate tape Category Codes. Ref: Guide to Sharing and Partitioning IBM Tape Library Dataservers, SG24-4409. Redbooks: Tivoli Storage Manager Version 3.7.3 & 4.1: Technical Guide, section 8.2; Tivoli Storage Manager SAN Tape Library Sharing. See also: 3494SHARED; DRIVEACQUIRERETRY; MPTIMEOUT 3494 sluggish The 3494 may be taking an unusually long time to mount tapes or scan barcodes. Possible reasons: - A lot of drive cleaning activity can delay mounts. (A library suddenly exposed to a lot of dust could evidence a sudden surge in cleaning.) A shortage of cleaning cartridges could aggravate that. - Drive problems which delay ejects or positioning. - Library running in degraded mode. - lmcpd daemon or network problems which delay getting requests to the library. - See if response to 'mtlib' commands is sluggish. This can be caused by DNS service problems to the OS/2 embedded system. (That PC is typically configured once, then forgotten; but DNS servers may change in your environment, requiring the OS/2 config to be updated.)
Use the mtlib command to get status on the library to see if any odd condition exists, and visit the 3494 if necessary to inspect its status. Observe it responding to host requests to gauge where the delay is. 3494 SNMP support The 3494 (beginning with Library Manager code 518) supports SNMP alert messaging, enabling you to monitor 3494 operations from one or more SNMP monitor stations. This initial support provides more than 80 operator-class alert messages covering: 3494 device operations Data cartridge alerts Service requests VTS alerts See "SNMP Options" in the 3494 Operator Guide manual. 3494 status 'mtlib -l /dev/lmcp0 -qL' 3494 Tape Library Specialist Provides web access to your 3494 LM. Requires that the LM PC have at least 64 MB of memory, be at LM code level 524 or greater, and have FC 5045 (Enhanced Library Manager). 3494 tapes, list 'mtlib -l /dev/lmcp0 -qI' (or use options -vqI for verbosity, for more descriptive output) 3494 TCP/IP, set up This is done during 3494 installation, in OS/2 mode, upon invoking the HOSTINST command, where a virtual "flip-book" will appear so that you can click on tabs within it, including a Network tab. After installation, you could go into OS/2 and there do 'cd \tcpip\bin' and enter the command 'tcpipcfg' and click in the Network tab. Therein you can set the IP address, subnet mask, and default gateway. 3494 vision failure May be simply a dusty lens, where cleaning it will fix the problem. 3494 volume, delete from Library Manager database A destroyed tape which the 3494 spits out will remain in the 3494 library's database indefinitely, with a category code x'FFFA'. To get rid of that useless entry, use the FFFB (Purge Volume) category code, as in: 'mtlib -l /dev/lmcp0 -vC -V VolName -t FFFB' Sometimes the response to that operation is: Change Category operation Failed (errno = 5), ERPA code - 27, Command Reject.
I've found that you can get past that by first setting the volume category code to FF10 (Convenience Eject), then reattempting the FFFB. See also: Purge Volume category; Volume Categories 3494 volume, list state, class, volser, category 'mtlib -l /dev/lmcp0 -vqV -V VolName' 3494 volume, last usage date 'mtlib -l /dev/lmcp0 -qE -uFs -V VolName' 3494 volumes, list 'mtlib -l /dev/lmcp0 -qI' (or use options -vqI for verbosity, for more descriptive output) 3494SHARED To improve performance of allocation of 3590 drives in the 3494, introduced by APAR IX88531... ADSM was checking all available drives on a 3494 for availability before using one of them. Each check took 2 seconds and was being performed twice per drive, once for each available drive and once for the selected drive. This resulted in needless delays in mounting a volume. The reason for this is that in a shared 3494 library environment, ADSM physically verifies that each drive assigned to ADSM is available and not being used by another application. The problem is that if ADSM is the only application using the assigned drives, this extra time to physically check the drives is not needed. This was addressed by adding a new option, 3494SHARED, to control sharing. Selections: No (default) The 3494 is not being shared by any other application. That is, only one or more ADSM servers are accessing the 3494. Yes ADSM will select a drive that is available and not being used by any other application. You should only enable this option if you have more than two (2) drives in your library. If you are currently sharing a 3494 library with other applications, you will need to specify this option. See also: DRIVEACQUIRERETRY; MPTIMEOUT 3495 Predecessor to the 3494, containing a GM robot, like those used in car assembly. 3570 Introduced in 1995, the 3570 Tape Subsystem is based on the same technology that would later be used in the IBM 3590 High Performance Tape Subsystem (though in no way compatible with it).
The 3570 was the first IBM tape product to use head servo control, or "servoing". The method employed is timing-based servoing (TBS), in which the duration between pulses from obliquely written patterns contains the position information. Tapes were factory-formatted with the TBS tracks. The head is mounted on a voice-coil-driven positioning device, and dedicated sensors in the head read the servo-track data. A position error signal controls the voice coil. The actuator performs two functions, moving the head to specific track locations on the tape and maintaining alignment between the head and the servo tracks. This technology facilitates intensive read and write operations, providing faster data access than other tape technologies, with a time to read/write data of eight seconds from cassette insertion. The 3570 also incorporates a high-speed search function. The tape drive reads and writes data in a 128-track format, four tracks at a time in the initial model, eight tracks at a time in later models. Data is written using an interleaved serpentine longitudinal recording format starting at the center of the tape (mid-tape load point) and continuing to near the end of the tape. The head is indexed to the next set of tracks and data is written back to the mid-tape load point. This process continues in the other direction until the tape is full. Cartridge: 8mm tape, housed in a twin-hub tape cassette that is approximately half the size of the 3490/3590 cartridge tapes, opening at the end. Initial cassette capacity was 5 GB uncompressed and up to 15 GB per cassette with LZ1 data compaction. Also called "Magstar MP" (where the MP stands for Multi-Purpose), supported by the Atape driver. Think "3590, Jr." The tape is half-wound at load time, so the drive can get to either end of the tape in half the time it would take if the tape were fully wound. Cartridge type letter: 'F' (does not participate in the volser).
An early problem of "Lost tension" was common, attributed to bad tapes, rather than the tape drives. *SM library type: SCSI Library Product summary: http://www.ibm.com/ibm/history/exhibits/storage/storage_3570.html Manuals: http://www.ibm.com/servers/storage/support/tape/3570/installing.html 3570 "tapeutil" for NT See: ntutil 3570, to act as an ADSM library Configure to operate in Random Mode and Base Configuration. This allows ADSM to use the second drive for reclamation. (The Magstar will not function as a library within ADSM when set to "automatic".) The /dev/rmt_.smc SCSI Media Changer special device allows library style control of the 3570. 3570/3575 Autoclean This feature does not interfere with ADSM: the 3570 has its own slot for the cleaner that is not visible to ADSM, and the 3575 hides the cleaners from ADSM. 3570 configurations Base: All library elements are available to all hosts. In dual drive models, it is selected from Drive 1 but applies to both drives. This config is primarily used for single host attachment. (Special Note for dual drive models: In this config, you can only load tapes to Drive 1 via the LED display panel as everything is keyed off of Drive 1. However, you may load tapes to Drive 2 via tapeutil if the Library mode is set to 'Random'.) Split: This config is most often used when the library unit is to be twin-tailed between 2 hosts. In this config, the library is "split" into 2 smaller half size libraries, each to be used by only one host. This is advantageous when an application does not allow the sharing of one tape drive between 2 hosts. The "first/primary" library consists of: Drive 1 The import/export (priority) cell The rightmost magazine Transport Mechanism The "second" library consists of: Drive 2 The leftmost magazine Transport Mechanism 3570 Element addresses Drive 0 is element 16, Drive 1 is element 17. 3570 mode A 3570 library must be in RANDOM mode to be usable by TSM: AUTO mode is no good.
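The 'tapeutil' medium changer access mentioned above can also be used to take stock of a SCSI library from the host, outside of *SM. A minimal sketch for counting occupied slots in the inventory report; the "Volume Tag" field name and its dot-padded layout are assumptions about the report format at your driver level:

```shell
#!/bin/sh
# Sketch: count occupied storage slots from 'tapeutil -f /dev/smc0 inventory'.
# ASSUMPTION: each slot reports a "Volume Tag" line, left blank when the slot
# is empty; adjust the pattern to your Atape/tapeutil level's actual output.
count_occupied() {
  # $1 = captured inventory output; prints the number of non-blank tags
  echo "$1" | grep -c 'Volume Tag[. ]*[A-Z0-9]'
}

# Live use would be:  inv=`tapeutil -f /dev/smc0 inventory`
# Assumed sample report, for illustration only:
inv="Slot Address ............... 32
Volume Tag ................. ABC123
Slot Address ............... 33
Volume Tag .................
Slot Address ............... 34
Volume Tag ................. DEF456"
count_occupied "$inv"
```

Comparing that count against 'Query LIBVolume' output is a quick way to spot tapes the library holds but *SM has lost track of.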
3570 tape drive cleaning Enable Autocleaning. Check with the library operator guide. The 3570 has a dedicated cleaning tape storage slot, which does not take one of the library slots. 3573 See: TS3100 3575 3570 library from IBM. Attachment via: SCSI-2. As of early 2001, customers report a problem of tape media snapping: the cartridge gets loaded into the drive by the library but it never comes ready: such a cartridge may not be repairable. Does not have a Teach operation like the 3494. Ref: Redbook: Magstar MP 3575 Tape Library Dataserver: Multiplatform Implementation. *SM library type: SCSI Library 3575, support C-Format XL tapes? In AIX, do 'lscfg -vl rmt_': A drive capable of supporting C tapes should report "Machine Type and Model 03570C.." and the microcode level should be at least 41A. 3575 configuration The library should be device /dev/smc0 as reflected in AIX command 'lsdev -C tape'...not /dev/lb0 nor /dev/rmtX.smc as erroneously specified in the Admin manuals. 3575 tape drive cleaning The 3575 does NOT have a dedicated cleaning tape storage slot. It takes up one of the "normal" tape slots, reducing the Library capacity by one. 357x library/drives configuration You don't need to define an ADSM device for 357x library/drives under AIX: the ADSM server on AIX uses the /dev/rmtx device. Don't go under SMIT ADSM DEVICES but just run 'cfgmgr'. Once the rmtx devices are available in AIX, you can define them to ADSM via the admin command line. For example, assuming you have two drives, rmt0 and rmt1, you would use the following adsm admin commands to define the library and drives: DEFine LIBRary mylib LIBType=SCSI DEVice=/dev/rmt0.smc DEFine DRive mylib drive1 DEVice=/dev/rmt0 ELEMent=16 DEFine DRive mylib drive2 DEVice=/dev/rmt1 ELEMent=17 (you may want to verify the element numbers but these are usually the default ones) 3575 - L32 Magstar Library contents, list Unix: 'tapeutil -f /dev/smc0 inventory' 358x drives These are LTO Ultrium drives.
Supported by IBM Atape device driver. See: LTO; Ultrium 3580 IBM model number for LTO Ultrium tape drive. A basic full-height, 5.25" drive in a SCSI enclosure; two-line LCD readout. Flavors: L11, low-voltage differential (LVD) Ultra2 Wide SCSI; H11, high-voltage differential SCSI. Often used with Adaptec 29160 SCSI card (but use the IBM driver - not the Adaptec driver). The 3580 Tape Drive is capable of data transfer rates of 15 MB per second with no compression and 30 MB per second at 2:1 compression. (Do not expect to come close to such numbers when backing up small files: see "Backhitch".) Review: www.internetweek.com/reviews00/rev120400-2.htm The Ultrium 1 drives have had problems: - Tapes would get stuck in the drives. IBM (Europe?) engineered a field compensation involving installing a "clip" in the drive. This is ECA 009, which is not a mandatory EC; to be applied only if the customer sees frequent B881 errors in the library containing the drive. The part number is 18P7835 (includes tool). Takes about half an hour to apply. One customer reports still having problems with the clip installed, seemingly due to inferior cartridge construction. - Faulty microcode. As evidenced in a late 2003 defect where certain types of permanent write errors, followed by a rewind command, cause an end of data (EOD) mark to be written at the BOT (beginning of tape). See also: LTO; Ultrium 3580 (LTO) cleaning cartridge life The manual specifies how much you should expect out of a cleaning cartridge: "The IBM TotalStorage LTO Ultrium Cleaning Cartridge is valid for 50 uses." (2003 manual) Customers report that if you insert a cleaning tape when the drive is not seeking to be cleaned, it will not clean the drive. However, the usage count for the cleaning cartridge will still be incremented. (This behavior is subject to microcode changes.) 3580 (LTO Ultrium) microcode See IBM site Readme S1002360.
(3580 microcode) (Search on +"Drivecode Level" ) 3580 volser Whereas tape technology which originated in the mainframe environment used the term "volser" to refer to the Volume Identifier, Ultrium uses the term "volid" (descriptively, "volume name"). The Volume Identifier consists of six ASCII characters, left-justified, where the ASCII characters may be A-Z (41h-5Ah), 0-9 (30h-39h) and the combinations of "CLN" and "DG ". The Volume Identifier may be followed by two more ASCII characters, which are the Media Identifier, where the first character is 'L' (identifying the cartridge as LTO), and the second character identifies the generation and type of cartridge. The TSM Implementation Guide redbook says: "Although IBM Tivoli Storage Manager allows you to use a volume identification longer than six characters, we strongly recommend that you use up to six alphanumeric characters for the label (also known as VOLSER or Volume ID). This should be compatible with other ANSI styled label systems." Note that IBM tape systems in general - including the modern 3592 - utilize six characters for the volser. AIX: Uses the Atape driver, and that may provide SMIT with the ability for the customer to set the barcode length: smit --> Devices --> Tape Drive, then select Change/Show and then your Medium Changer, where the last parameter on the Change screen says: "TSM Barcode Length for Ultrium 1/Ultrium 2 Media". IBM Technotes: 1144913, 1153376, 1154231 Ref: "IBM LTO Ultrium Cartridge Label Specification", IBM site item S7000429 3581 IBM model number for LTO Ultrium tape drive with autoloader. Obviously, no barcode reader. Houses one drive and seven cartridge slots: five in front, two in the rear. Can be used as a TSM library, type SCSI. Requires the IBM Device Driver.
Models, withdrawn from marketing: F28, L28 Discontinued 2006/05/16 F38, L38 Discontinued 2006/10/27 The IBM TotalStorage 3581 Tape Autoloader has been replaced by the IBM System Storage TS3100 Tape Library Express Model. See also: Backhitch; LTO; Ultrium 3581, configuring under AIX Simply install the device driver and you should be able to see both the drive and medium changer devices as SCSI tape devices (/dev/rmt0 and /dev/smc0). The original 3581 was rather boxy: the more modern version is low-profile, 2U. When configuring the library and drive in TSM, use device type "LTO", not SCSI. Ref: TSM 4.1.3 server README file 3582 IBM LTO Ultrium cartridge tape library. Up to 2 Ultrium 2 tape drives and 23 tape cartridges. Requires the Atape driver on AIX and like hosts: Atape level 8.1.3.0 added support for the 3582 library. Reportedly not supported by TSM 5.2.2. See also: Backhitch; LTO; Ultrium 3583 IBM LTO Ultrium cartridge tape library. Formal name: "LTO Ultrium Scalable Tape Library 3583". (But it is only slightly scalable: look to the 3584 for higher capacity.) Six drives, 18 cartridges. Can have up to 5 storage columns, which the picker/mounter accesses as in a silo. Column 1 can contain a single-slot or 12-slot I/O station. Column 2 contains cartridge storage slots and is standard in all libraries. Column 3 contains drives. Columns 4 and 5 may be optionally installed and contain cartridge storage slots. Beginning with Column 1 (the I/O station column), the columns are ordered clockwise. The three columns which can house cartridges do so with three removable magazines of six slots each: 18 slots per column, 54 slots total. Add two removable I/O station magazines through the door and one inside the door to total 72 cells, 60 of which are wholly inside the unit. (There are reports that 2 of those 60 slots are reserved for internal tape drive mounts, though that doesn't show up in the doc.)
Model L72: 72 cartridge storage slots As of 2004 handles the Ultrium 2 or Ultrium 1 tape drive. The Ultrium 2 drive can work with Ultrium 1 media, but at lesser speeds (see "Tape Drive Performance" in the 3583 Setup and Operator Guide manual). Cleaning tapes: They live in the 3 reserved, nonaddressable slots at the top of columns 2, 4, and 5. (The barcode reader cannot get to those slots.) Thus, while there are 19 slots available in those storage columns, the server can only access 18 of those slots. http://www.storage.ibm.com/hardsoft/tape/pubs/pubs3583.html *SM library type: SCSI Library The 3583 had a variety of early problems such as static buildup: the picker would run fine for a while, until enough static built up, then it would die for no reason apparent to the user. The fix was to replace the early rev picker with a newer design. Reports indicate that IBM is rebranding what is actually an ADIC 100 library: IBM and Dell OEM this library from ADIC. Beware that many replacement parts are refurbished rather than new. The 3583 was problematic for many customers (largely due to endless problems with its loader), and was discontinued in mid 2006. (In contrast, the 3584 has been a good product.) See also: 3584; Accelis; L1; Ultrium 3583, convert I/O station to slots Via Setup->Utils->Config. Then you have to get the change understood by TSM - and perhaps the operating system. A TSM AUDit LIBRary may be enough; or you may have to incite an operating system re-learning of the SCSI change, which may involve rebooting the opsys. 3583 cleaning cartridge Volser must start with "CLNI" so that the library recognizes the cleaning tape as such (else it assumes it's a data cartridge). The cleaning cartridge is stored in any slot in the library. Recent (2002/12) updates to firmware force the library to handle cleaning itself and hide the cleaning cartridges from *SM. 3583 door locked, never openable See description of padlock icon in the 3583 manual.
A basic cause is that the I/O station has been configured as all storage slots (rather than all I/O slots). In a Windows environment, this may be caused by RSM taking control of the library: disable RSM when it is not needed. This condition may be a fluke which power-cycling the library will undo. 3583 driver and installation The LTO/Ultrium tape technology was co-developed by IBM, and so they provide a native device driver. In AIX, it is supported by Atape; in Solaris, by IBMTape; in Windows, by IBMUltrium; in HP-UX, by atdd. 1. Install the Ultrium device driver, available from the ftp://ftp.software.ibm.com/storage/devdrvr/ directory 2. In NT, under Tape Devices, press ESC on the first panel. 3. Select the Drivers tab and add your library. 4. Select the 3583 library and click on OK. 5. Press Yes to use the existing files. 3583 microcode search IBM's parlance for the microcode, as evidenced in their web pages, is "drivecode", so search on that to turn up listings, such as "3583 LTO Ultrium 2 Tape Drive README". 3583 microcode updating IBM site Hints and Tips document S1001315 "Updating Scalable Tape Library Firmware" has good instructions on doing this. Caution: One customer reports that not opening the library device for read/write prior to putting the library into firmware update mode can result in not being able to talk to the library or even reboot it. (If encountered, try pulling the plug on the library for at least several minutes.) 3583 "missing slots" If not all storage cells in the library are usable (the count of usable slots is short), it can be caused by a corrupt volume whose label cannot be read during an AUDit LIBRary. You may have to perform a Restore Volume once the volume is identified. 3583 monitoring TapeAlert is available. 3583 password 3583 menus are protected by a customer-established password. There seems to be no documented way of resetting the password, to start over, if the password is lost, however.
(This is a nasty problem on IBM and other vendor equipment in general, which can result in being locked out.) 3584 The high end of IBM's mid-range tape library offerings. Formal name: LTO UltraScalable Tape Library On May 9, 2006, the IBM TotalStorage 3584 Tape Library was renamed to the IBM System Storage TS3500 Tape Library: "3584" survives as the "machine type". Initially housed LTO Ultrium drives and cartridges; but as of mid 2004 also supports 3592 J1A. Twelve drives, 72 cartridges. Can also support DLT. Interface: Fibre Channel or SCSI The 3584 is a "SCSI library": the host has to direct its actions. If you're accustomed to the 3494 and its automated actions, don't expect anything like that in the 3584. As one customer said: The 3584 is pretty much an automatic tape mounter with bulk storage and no real 'smarts'. Its robotics are reported to be much faster than those in the 3494, making for faster mounting of tapes. In Unix, the library is defined as device /dev/smc0, and by default is LUN 1 on the lowest-number tape drive in the partition - normally drive 1 in the library, termed the Master Drive by CEs. (Remove that drive and you suffer ANR8840E trying to interact with the library.) In AIX, 'lsdev -Cc tape' should show all the devices. The 3584 has a web interface ("Specialist"), but the library control panel cannot be seen from it. The first frame contains an I/O pass-through portal for transferring tapes into and out of the library, without opening doors. The 3584 calls this its "I/O Station". IBM configuration Technote 1053638 refers to it as "Import/Export Slots". Inventorying: You can cause the library to reassess its inventory by opening and closing the door on each frame that you want it to re-inventory, as a physical method. Alternately, use its control panel or web page to initiate a library scan (Library -> Frames). Technotes: 1168963 As of early 2007, other library choices are TS3310 or the TS3500. 
See also: LTO; Ultrium 3584, 3592, 8-character volname On AIX for 3592 drives you have to enable 8 characters at the 3584 and also go into 'smitty devices' > Library Media Changer (smc0), amounting to doing chdev -l smc0 -a tsm_barcode_len='8' 3584 bar code reading The library can be set to read either just the 6-char cartridge serial ("normal" mode) or that plus the "L1" media identifier as well ("extended" mode). Should barcode reading have problems in the 3584, there is an Adjust Scanner Speed capability. Prolonged use of a slower scanning speed is abnormal. Barcode scanning cannot be arbitrarily turned off, as it can in some lesser libraries. 3584 Checkin/Checkout replies To avoid having to reply to Checkin or Checkout operations: Before TSM5.3: Use "Search=BULK" with Checkin, "Remove=BULK" with Checkout. TSM5.3+: On Checkin, use WAITTime=0 to prevent the prompt. 3584 cleaning cartridge Volser must start with "CLNI" or "CLNU" so that the library recognizes the cleaning tape as such (else it assumes it's a data cartridge). The cleaning cartridge is stored in any data-tape slot in the library (but certainly not the Diagnostic Tape slot). Follow the 3584 manual's procedure for inserting cleaning cartridges. Auto Clean should be activated, to allow the library to manage drive cleanings on its own. In this mode, TSM does not track Cleanings Left in Libvolumes, so the library has to be asked about that information, as via the operator panel: Main Menu > Usage Statistics > Cleaning Cartridge Usage Or, from the library Web page: Physical Library > Cleaning Cartridges. (You may be able to use the free 'wget' command to get this data into your AIX or like operating system.) The cleaning tape is valid for 50 uses. When the cartridge expires, the library displays an Activity screen like the following: Remove CLNUxxL1 Cleaning Cartridge Expired 3584 firmware Download via IBM Web page S4000043. 
IBM Technotes: S1002310 ("LTO 3584 UltraScalable Tape Library README") 3584 firmware level Is displayed in the upper left corner of the Activity Screen, e.g., "Version 3314". In AIX you can do lscfg -vpl smcX; lscfg -vpl rmtX In Linux you can do like 'mtx -f /dev/sg3 inquiry' to get Product Type, Vendor ID, Product ID, Revision, Attached Changer. You can also use the 'tapeutil' "inquiry" subcommand to get Vendor ID, Product ID, Product Revision Level. 3584 I/O Station occupied? If you're into C programming, I believe that you can directly query the state of the "Import/Export Slots" elements, per the IBM Ultrium Device Drivers Programming Reference manual: search for their "Media Present" example. The lbtest utility may also expose data which indicates tape presence. 3584 microcode See: 3584 firmware 3584 monitoring TapeAlert is available, and SNMP Trap monitoring opportunities are available. 3584 Remote Support The official name of the CE hardware support communication link to the 3584, aka Call Home / Heartbeat Call Home / MRPD. It is phone-line based, and consists of a 56Kbps modem that rests in the bottom of the 3584 frame. A cable connects from the frame to the modem, with other feature codes depending on the number of frames constituting the 3584. The frame connection point may vary. When the unit calls IBM, a PMR is automatically created, giving hardware support an opportunity to plan its correction. The CE should have a laptop with CETOOL loaded which can be used to connect and test. The unit is supposed to phone home weekly, as a routine line check; it also phones out an hour after any POR. "Problems" includes lib issues and drive load/unload but NOT drive r/w or server interface problems. It also calls when upgrades/changes are done.
Info collected includes: - machine type, model, s/n of each frame - type, host attachment, s/n of each drive - type and firmware level of each lib node card, drive control path, canister card, and hot swap drive power supply - remote service (Call Home & WTI switch features) - SNMP - IBM Specialist - type and number of i/o stations - type of grippers - feature codes Further details are in the (voluminous) hardware MIM (maintenance info manual) that is likely stuffed in the back cover or shelf somewhere. Look for 'Remote Support' under Intro section and 'Installing Remote Support' under Install section -- it also shows the CE tool screens. These manuals are updated/redistributed periodically, perhaps annually. They are hardcopy only, and are supposed to stay with/in the 3584. 3584 WWNs On a 3584, each drive has a unique WWNN that incorporates both the library serial number and information about where the drive is in the library. (Control path drives have a second LUN for communicating with the library.) 3590 IBM's fourth generation of this 1/2" tape cartridge technology, using a single-reel approach and servo tracking pre-recorded on the tape for precise positioning. Excellent start-stop performance. Uses magneto-resistive heads for high density recording. Introduced: 1995 Tape length: 300 meters (1100 feet) Tracks: 128, written 16 at a time, in serpentine fashion. The head contains 32 track writers: As the tape moves forward, 16 tracks are written until EOT is encountered, whereupon electronic switching causes the other 16 track writers in the heads to be used as the tape moves backwards towards BOT. Then, the head is physically moved (indexed) to repeat the process, until finally all 128 tracks are written as 8 interleaved sets of 16 tracks. Transfer rate: Between host and tape unit buffer: 20 MB/sec with fast, wide, differential SCSI; 17 MB/sec via ESCON channel interface. Between buffer and drive head: 9 MB/sec.
Pigment: MP1 (Metal Particle 1) Note that "3590" is a special, reserved DEVType used in 'DEFine DEVclass'. Cartridge type letter: 'J' (does not participate in the volser). See publications references at the bottom of this document. See also: 3590E Previous generation: 3490E Next generation: 3590E See also: MP1 3590, AIX error messages If a defective 3590 is continually putting these out, rendering the drive Unavailable from the 3494 console will cause the errors to be discontinued. 3590, bad block, dealing with Sometimes there is just one bad area on a long, expensive tape. Wouldn't it be nice to be able to flag that area as bad and be able to use the remainder of the tape for viable storage? Unfortunately, there is no documented way to achieve this with 3590 tape technology: when just one area of a tape goes bad, the tape becomes worthless. 3590, handling DO NOT unspool tape from a 3590 cartridge unless you are either performing a careful leader block replacement or a post-mortem. Unspooling the tape can destroy it! The issue is one of clearances: The spool inside the cartridge is spring-loaded so as to keep it from moving when not loaded. The tape drive will push the spool hub upward into the cartridge slightly, which disengages the locking. The positioning is exacting. If the spool is not at just the right elevation within the cartridge, the edge of the tape will abrade against the cartridge shell, resulting in substantial, irreversible damage to the tape. 3590, write-protected? With all modern media, a "void" in the sensing position indicates writing not allowed. IBM 3480/3490/3590 tape cartridges have a thumbwheel (File Protect Selector) which, when turned, reveals a flat spot on the thumbwheel cylinder, which is that void/depression indicating writing not allowed. So, when you see the dot, it means that the media is write-protected. Rotate the thumbwheel away from that to make the media writable.
Some cartridges show a padlock instead of a dot, which is a great leap forward in human engineering. See also: Write-protection of media 3590 barcode Is formally "Automatic Identification Manufacturers Uniform Symbol Description Version 3", otherwise known as Code 39. It runs across the full width of the label. The two recognized vendors: Engineered Data Products (EDP) Tri-Optic Wright Line Tri-Code Ref: Redbook "IBM Magstar Tape Products Family: A Practical Guide", topic Cartridge Labels and Bar Codes. See also: Code 39 3590 Blksize See: Block size used for removable media 3590 capacity See: 3590 'J'; 3590 'K' See also: ESTCAPacity 3590 cleaning See: 3590 tape drive cleaning 3590 cleaning interval The normal preventive maintenance interval for the 3590 is once every 150 GB (about once every 15 tapes). Adjust via the 3494 Operator Station Commands menu selection "Schedule Cleaning", in the "Usage clean" box. The Magstar Tape Guide redbook recommends setting the value to 999 to let the drive incite cleaning, rather than have the 3494 Library Manager initiate it (apparently to minimize drive wear). Ref: 3590 manual; "IBM Magstar Tape Products Family: A Practical Guide" redbook 3590 cleaning tape Color: Black shell, with gray end notches 3590 cleaning tape mounts, by drive, display Put the 3494 into Pause mode; Open the 3494 door to access the given 3590's control panel; Select "Show Statistics Menu"; See "Clean Mounts" value. 3590 compression of data The 3590 performs automatic compression of data written to the tape, increasing both the effective capacity of the 10 GB cartridge and the effective write speed of the drive. The 3590's data compression algorithm is a Ziv-Lempel technique called IBMLZ1, more effective than the BAC algorithm used in the 3480 and 3490.
Ref: Redbook "Magstar and IBM 3590 High Performance Tape Subsystem Technical Guide" (SG24-2506) See also: Compression algorithm, client 3590 Devclass, define 'DEFine DEVclass DevclassName DEVType=3590 LIBRary=LibName [FORMAT=DRIVE|3590B|3590C| 3590E-B|3590E-C] [MOUNTLimit=Ndrives] [MOUNTRetention=Nmins] [PREFIX=TapeVolserPrefix] [ESTCAPacity=X] [MOUNTWait=Nmins]' Note that "3590" is a special, reserved DEVType. 3590 drive* See: 3590 tape drive* 3590 EOV processing There is a volume status of "full" for 3590 volumes. 3590 volumes will do EOV processing when the drive signals end of tape, or when the maxcapacity is reached, if maxcapacity has been set. When the drive signals end of tape, EOV processing will occur even if maxcapacity has not been reached. Contrast with 3490 EOV processing. 3590 errors See: MIM; SARS; SIM; VCR 3590 exploded diagram (internals) http://www.thic.org/pdf/Oct00/imation.jgoins.001003.pdf page 20 3590 Fibre Channel interface There are two fibre channel interfaces on the 3590 drive, for attaching to up to 2 hosts. Supported in TSM 3.7.3.6 Available for 3590E & 3590H drives but not for 3590B. 3590 'J' (3590J) 3590 High Performance Cartridge Tape (HPCT), the original 3590 tape cartridge, containing 300 meters of half-inch tape. Predecessor: 3490 "E" Barcodette letter: 'J' Color of leader block and notch tabs: blue Compatible drives: 3590 B; 3590 E; 3590 H Capacity: 10 GB native on Model B drives (up to 30 GB with 3:1 compression); 20 GB native on Model E drives (up to 60 GB with 3:1 compression); 30 GB native on Model H drives (up to 90 GB with 3:1 compression); Cartridge weight: 240 grams (8.46 oz) Notes: Has the thickest tape of the 3590 tape family, so should be the most robust. Erasing: To erase a used tape and thus obliterate any existing data, in order to sell the tape or otherwise render the tape innocuous, use the 'tapeutil/ntutil' command Erase function, which takes about 15 minutes to erase this kind of tape.
See also: 3590 cleaning tape; 3590 tape cartridge; 3590 'K'; EHPCT; HPCT 3590 'K' (3590 K; 3590K) 3590 Extended High Performance Cartridge Tape, aka "Extended length", "double length": 600 meters of thinner tape. Available: March 3, 2000 Predecessor: 3590 'J' Barcodette letter: 'K' Color of leader block and notch tabs: green Compatible drives: 3590 E; 3590 H Capacity: 40 GB native on 3590 E drives (up to 120 GB with 3:1 compression, depending upon the compressibility of the data); 60 GB native on Model H drives (up to 180 GB with 3:1 compression); Cartridge weight: 250 grams (8.8 oz) Hardware Announcement: ZG02-0301 Life expectancy: "15-20 years" (per IBM FAQ FQ100665) Notes: The double length of the tape spool makes for longer average positioning times. Fragility: Because so much tape is packed into the cartridge, it tends to be rather close to the inside of the shell, and so is more readily damaged if the tape is dropped, as compared to the 3590 'J'. Erasing: To erase a used tape and thus obliterate any existing data, in order to sell the tape or otherwise render the tape innocuous, use the 'tapeutil/ntutil' command Erase function, which takes about 50 minutes to erase this kind of tape. 3590 media life In http://www.thic.org/pdf/Oct00/imation.jgoins.001003.pdf, Imation says that for the Advanced Metal Particle 1 tape formulation used in 3590 that media life should be expected to be 15 - 30 years, with 5 - 10% magnetization loss after 15 years. The greater issue is how well the tapes were made in their batch, and how they were handled (tape is a contact medium) and the atmospheric conditions in which they were used (cartridges are open to the air). 3590 microcode ftp://index.storsys.ibm.com/3590/code3590/index.html There are .fixlist and .fmrz files in the directory.
3590 microcode level Unix: 'tapeutil -f /dev/rmt_ vpd' (drive must not be busy) see "Revision Level" value AIX: 'lscfg -vl rmt_' see "Device Specific.(FW)" Windows: 'ntutil -t tape_ vpd' Microcode level shows up as "Revision Level". 3590 Model B11 Single-drive unit with attached 10-cartridge Automatic Cartridge Facility, intended to be rack-mounted (IBM 7202 rack). Can be used as a mini library. Interface is via integral SCSI-3 controller with two ports. As of late 1996 it is not possible to perform reclamation between 2 3590 B11s, because they are considered separate "libraries". Ref: "IBM TotalStorage Tape Device Drivers: Installation and User's Guide", Tape and Medium Changer Device Driver section. 3590 Model B1A Single-drive unit intended to be installed in a 3494 library. Interface is via integral SCSI-3 controller with two ports. 3590 Model E11 Rack-mounted 3590E drive with attached 10-cartridge ACF. 3590 Model E1A 3590E drive to be incorporated into a 3494. 3590 modes of operation (Referring to a 3590 drive, not in a 3494 library, with a tape magazine feeder on it.) Manual: The operator selects Start to load the next cartridge. Accumulate: Take each next cartridge from the Priority Cell, return to the magazine. Automatic: Load next tape from magazine without a host Load request. System: Wait for Load request from host before loading next tape from magazine. Random: Host treats magazine as a mini library of 10 cartridges and uses Medium Mover SCSI cmds to select and move tapes between cells. Library: For incorporation of 3590 in a tape library server machine (robot). 3590 performance See: 3590 speed 3590 SCSI device address Selectable from the 3590's mini-panel, under the SET ADDRESS selection, device address range 0-F. 3590 Sense Codes Refer to the "3590 Hardware Reference" manual. 3590 servo tracks Each IBM 3590 High Performance Tape Cartridge has three prerecorded servo tracks, recorded at time of manufacture. 
The servo tracks enable the IBM 3590 tape subsystem drive to position the read/write head accurately during the write operation. If the servo tracks are damaged, the tape cannot be written to. 3590 sharing between two TSM servers Whether by fibre or SCSI cabling, when sharing a 3590 drive between two TSM servers, watch out for SCSI resets during reboots of the servers. If the server code and hardware don't mesh exactly right, it's possible to get a "mount point reserved" state, which requires a TSM restart to clear. 3590 speed Note from 1995 3590 announcement, number 195-106: "The actual throughput a customer may achieve is a function of many components, such as system processor, disk data rate, data block size, data compressibility, I/O attachments, and the system or application software used. Although the drive is capable of a 9-20MB/sec instantaneous data rate, other components of the system may limit the actual effective data rate. For example, an AS/400 Model F80 may save data with a 3590 drive at up to 5.7MB/sec. In a current RISC System/6000 environment, without filesystem striping, the disk, filesystem, and utilities will typically limit data rates to under 4MB/sec. However, for memory-to-tape or tape-to-tape applications, a RISC System/6000 may achieve data rates of up to 13MB/sec (9MB/sec uncompacted). With the 3590, the tape drive should no longer be the limiting component to achieving higher performance." See also IBM site Technote "D/T3590 Tape Drive Performance" 3590 statistics The 3590 tape drive tracks various usage statistics, which you can ask it to return to you, such as Drive Lifetime Mounts, Drive Lifetime Megabytes Written or Read, from the Log Page X'3D' (Subsystem Statistics), via discrete programming or with the 'tapeutil' command Log Sense Page operation, specifying page code 3d and a selected parameter number, like 40 for Drive Lifetime Mounts. Refer to the 3590 Hardware Reference manual for byte positions.
See also: 3590 tape drive, hours powered on; 3590 tape mounts, by drive 3590 tape cartridge AKA "High Performance Cartridge Tape". See: 3590 'J' 3590 tape drive The IBM tape drive used in the 3494 tape robot, supporting 10Gbytes per cartridge uncompressed, or typically 30Gbytes compressed via IDRC. Uses High Performance Cartridge Tape. 3590 tape drive, hours powered on Put the 3494 into Pause mode; Open the 3494 door to access the given 3590's control panel; Select "Show Statistics Menu"; See "Pwr On Hrs" value. 3590 tape drive, release from host Unix: 'tapeutil -f /dev/rmt? release' after having done a "reserve" Windows: 'ntutil -t tape_ release' 3590 tape drive, reserve from host Unix: 'tapeutil -f /dev/rmt? reserve' Windows: 'ntutil -t tape_ reserve' When done, release the drive: Unix: 'tapeutil -f /dev/rmt? release' Windows: 'ntutil -t tape_ release' 3590 tape drive Available? (AIX) 'lsdev -C -l rmt1' 3590 tape drive cleaning The drive may detect when it needs cleaning, at which point it will display its need on its front panel, and notify the library (if so attached via RS-422 interface) and the host system (AIX gets Error Log entry ERRID_TAPE_ERR6, "tape drive needs cleaning", or TAPE_DRIVE_CLEANING entry - there will be no corresponding Activity Log entry). The 3494 Library Manager would respond by adding a cleaning task to its Clean Queue, for when the drive is free. The 3494 may also be configured to perform cleaning on a scheduled basis, but be aware that this entails additional wear on the drive and makes the drive unavailable for some time, so choose this only if you find tapes going read-only due to I/O errors. Msgs: ANR8914I 3590 tape drive model number Do 'mtlib -l /dev/lmcp0 -D' The model number is in the third returned token. For example, in returned line: " 0, 00116050 003590B1A00" the model is 3590 B1A. 3590 tape drive serial number Do 'mtlib -l /dev/lmcp0 -D' The serial number is the second returned token, all but the last digit.
For example, in returned line: " 0, 00116050 003590B1A00" the serial number is 11605. 3590 tape drive sharing As of TSM 3.7, two TSM servers can be connected to each port on a twin-tailed 3590 SCSI drive in the 3494, in a feature called "auto-sharing". Prior to this, individual drives in a 3494 library could only be attached to a particular server (library partitioning): each drive was owned by one server. 3590 tape drive status, from host 'mtlib -l /dev/lmcp0 -qD -f /dev/rmt1' 3590 tape drives, list From AIX: 'mtlib -l /dev/lmcp0 -D' 3590 tape drives, list in AIX 'lsdev -C -c tape -H -t 3590' 3590 tape drives, not being used in a library See: Drives, not all in library being used 3590 tape mounts, by drive Put the 3494 into Pause mode; Open the 3494 door to access the given 3590's control panel; Select "Show Statistics Menu"; See "Mounts to Drv" value. See also: 3590 tape drive, hours powered on; 3590 statistics 3590 volume, verify Devclass See: SHow FORMAT3590 _VolName_ 3590B The original 3590 tape drives. Cartridges supported: 3590 'J' (10-30 GB), 'K' (20-60 GB) (Early B drives can use only 'J'.) Tracks: 128 total tracks, 16 at a time, in serpentine fashion. Number of servo tracks: 3 Interfaces: Two, SCSI (FWD) Previous generation: none in 3590 series; but 3490E conceptually. See also: 3590C 3590B vs. 3590E drives A tape labelled by a 3590E drive cannot be read by a 3590B drive. A tape labelled by a 3590B drive can be read by a 3590E drive, but cannot be written by a 3590E drive. The E model can read the B formatted cartridge. The E model writes in 256 track format only and can not write or append to a B formatted tape. The E model can reformat a B format tape and then can write in the E format. The B model can not read E formatted data. The B model can reformat an E format tape and then can write in the B format: the B model device must be a minimum device code level (A_39F or B_731) to do so.
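The mtlib token positions described above (model in the third token, serial in the second token minus its last digit) lend themselves to simple scripting. A minimal sketch, using the sample output line from the text; the stripping of the framing "00" characters around the model is my reading of the example, so verify against your own mtlib output:

```shell
# Sample 'mtlib -l /dev/lmcp0 -D' output line, taken from the text above.
# On a live system you would capture real output instead, e.g.:
#   line=$(mtlib -l /dev/lmcp0 -D | head -1)
line=" 0, 00116050 003590B1A00"
set -- $line                                  # split the line into tokens
serial=$(echo "$2" | sed 's/.$//; s/^0*//')   # 2nd token: drop last digit, leading zeros
model=$(echo "$3" | sed 's/^00//; s/00$//')   # 3rd token: strip framing "00"s
echo "model=$model serial=$serial"
```

Run against the sample line, this reports model 3590B1A and serial 11605, matching the worked example in the text.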
3590C FORMAT value in DEFine DEVclass for the original 3590 tape drives, when data compression is to be performed by the tape drive. See also: 3590B; DRIVE 3590E IBM's fifth generation of this 1/2" tape cartridge technology, using a single-reel approach and servo tracking pre-recorded on the tape for precise positioning. Excellent start-stop performance. Cartridges supported: 3590 'J' (20-60 GB), 'K' (40-120 GB) Tracks: 256 (2x the 3590B), written 16 at a time, in serpentine fashion. The head contains 32 track writers: As the tape moves forward, 16 tracks are written until EOT is encountered, whereupon electronic switching causes the other 16 track writers in the heads to be used as the tape moves backwards towards BOT. Then, the head is physically moved (indexed) to repeat the process, until finally all 256 tracks are written as 16 interleaved sets of 16 tracks. Number of servo tracks: 3 Interfaces: Two, SCSI (FWD) or FC As of March, 2000 comes with support for 3590 Extended High Performance Cartridge Tape, to again double capacity. Mixing of 3590B and 3590E drives in a single 3494 is outlined in the TSM 4.1 server README file. Devclass: FORMAT=3590E-C (not DRIVE) Previous generation: 3590B Next generation: 3590H 3590E? (Is a drive 3590E?) Expect to be able to tell if a 3590 drive is an E model by visual inspection: - Rear of drive (power cord end) having stickers saying "Magstar Model E" and "2x" (meaning that the EHPC feature is installed in the drive). - Drive display showing like "E1A-X" (drive type, where X indicates extended) in the lower left corner. (See Table 5 in 3590 Operator Guide manual.) 3590EE Extra long 3590E tapes (double length), available only from Imation starting early 2000. The cartridge accent color is green instead of blue, and the cartridges have a 'K' label instead of 'J'. Must be used with 3590E drives.
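The serpentine figures quoted in the 3590B and 3590E entries above are internally consistent, as a quick back-of-the-envelope check shows. The figures come from the text; the arithmetic below is purely illustrative:

```shell
# 16 tracks are written per end-to-end pass; each head index position
# covers one forward pass plus one backward pass (32 tracks total).
tracks_per_pass=16
for total in 128 256; do          # 3590B and 3590E track counts, per the text
  passes=$((total / tracks_per_pass))
  indexes=$((passes / 2))
  echo "$total tracks = $passes passes = $indexes head index positions"
done
```

So 128 tracks means 8 interleaved sets of 16 (as the 3590 entry says), and 256 tracks means 16 sets, matching the 3590E entry.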
3590H IBM's sixth generation of this 1/2" cartridge technology, using a single-reel approach and servo tracking pre-recorded on the tape for precise positioning. Excellent start-stop performance. Cartridges supported: 3590 'J' (30-90 GB), 'K' (60-180 GB) Capacity: 30GB native, ~90 GB compressed Tracks: 384 (1.5 times the 3590E) Compatibility: Can read, but not write, 128-track (3590) and 256-track (3590E) tapes. Supported in: TSM 5.1.1.0 Interfaces: Two, SCSI (FWD) or FC Devclass: FORMAT=3590E-C (not DRIVE) Previous generation: 3590E Next generation: 3592 (which is a complete departure, wholly incompatible) IBM Technotes: 1166965 3590K See: 3590 'K' 3590L AIX ODM type for 3590 Library models. 3592 (3592-J1A) The IBM TotalStorage Enterprise Tape Drive and Cartridge model numbers, introduced toward the end of 2003. The drive is only a drive: it slides into a cradle which externally provides power to the drive. The small form factor more severely limits the size of panel messages, to 8 chars. This model is a technology leap, akin to 3490->3590, meaning that though cartridge form remains the same, there is no compatibility whatever between this and what came before. Cleaning cartridges for the 3592 drive are likewise different. Rather than having a leader block, as in 3590 cartridges, the 3592 has a leader pin, located behind a retractable door. The 3592 cartridge is IBM's first one in the 359x series with an embedded, 4 KB memory chip (Cartridge Memory): Records are written to the chip every time the cartridge is unloaded from a 3592 J1A tape drive. Data is read and written to the CM via short range radio frequency communication and includes volser, the media in the cartridge, the data on the media, and tape errors (which allow the drive to learn that the cartridge media is "degraded").
These records are then used by the IBM Statistical Analysis and Reporting System (SARS) to analyze and report on tape drive and cartridge usage and help diagnose and isolate tape errors. SARS can also be used to proactively determine if the tape media or tape drive is degrading over time. Cleaning tapes also have CM, emphatically limiting their usage to 50 cycles. Currently, only the tape drive has the means to interact with the CM: in the future, the robotic picker might have that capability. The 3592 cartridges come in four data types, plus a cleaning cartridge: - The 3592 "JA" (Data) long rewritable cartridge: the high capacity tape which most customers would probably buy. Native capacity: 300 GB (Customers report getting up to 1.2 TB; but you might get only 244 GB on a tape.) Can be initialized to 60 GB to serve in a fast-access manner. Works with 3592 J1A tape drive. Colors: Black case, dark blue accents. Cartridge weight: 239 g (8.4 oz) - The 3592 "JJ" (Economy) short rewritable cartridge: the economical choice where lesser amounts of data are written to separate tapes. Native capacity: 60 GB. Works with 3592 J1A tape drive. Colors: Black case, light blue accents. - The 3592 "JW" (WORM) long write-once, read-many (WORM) cartridge. Native capacity: 300 GB. Colors: Platinum case, dark blue accents. - The 3592 "JR" (Economy WORM) short write-once, read-many (WORM) cartridge. Native capacity: 60 GB. Colors: Platinum case, light blue accents. - The 3592 cleaning tape, "JA" + "CLN" Colors: Black case, gray accents. Compression type: Byte Level Compression Scheme Swapping. With this type, it is not possible for the data to expand. (IBM docs also say that the drive uses LZ1 compression, and Streaming Lossless Data Compression (SLDC) data compression algorithm, and ELDC.) The TSM SCALECAPACITY operand of DEFine DEVClass can scale native capacity back from the full 300 GB down to a low of 60 GB.
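The SCALECAPACITY scaling mentioned above can be sanity-checked with trivial arithmetic. A sketch, assuming the operand is a percentage of the 300 GB native JA capacity and that 100, 90, and 20 are representative values; check the DEFine DEVclass reference for the values your server level actually accepts:

```shell
native_gb=300                      # 3592 JA native capacity, per the text
for pct in 100 90 20; do           # assumed percentage settings, for illustration
  scaled=$((native_gb * pct / 100))
  echo "SCALECAPACITY=$pct -> $scaled GB per volume"
done
```

Note how the low end of the assumed range (20% of 300 GB) lands on the 60 GB fast-access figure quoted for the JA cartridge.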
The 3592 cartridges may live in either a 3494 library (in a new frame type - L22, D22, and D24 - separate from any other 3590 tape drives in the library); or a special frame of a 3584 library. Host connectivity: Dual ported switched fabric 2-Gbps Fibre Channel attachment (but online to only one host at a time). Physical connection is FC, but the drive employs the SCSI-3 command set for operation, in a manner greatly compatible with the 3590, simplifying host application support of the drive. As with the 3590 tape generation, the 3592 has servo information factory-written on the tape. (Do not degauss such cartridges. If you need to obliterate the data on a cartridge, perform a Data Security Erase.) Drive data transfer rate: up to 40MB/s (2.5 times the speed of 3590E or H) Cartridge life: Specified by IBM as 300 full file passes, and 20,000 load and unload cycles. Data life: 30 years, with less than 5% loss in demagnetization when the cartridge is stored at a temperature of 16 C to 25 C, 20% to 50% non-condensing humidity, and wet bulb temperature of 26 C maximum. Barcode label: Consists of 8 chars, the first 6 being the tape volser, and the last 2 being media type ("JA"). Tape vendors: Fuji, Imation (IBM will not be manufacturing tape) The J1A version of the drive is supported in the 3584 library, as of mid 2004. There is a CE Service Panel installed in each drive's bay area of a tape library containing 3592 drives, which can be connected to a plug located at the extreme bottom of the rear of the drive. The CE can thereby perform drive resets and other operations without having to go inside the library robotics area. IBM brochure, specs: G225-6987-01 http://www.fuji-magnetics.com/en/company/news/index2_html Next generation: 3592-E05 (TS1120) 3592-E05 3592 generation 2, aka TS1120. Uses the same media as generation 1 drives, but gen 2 drives can write in a different format, which gen 1 drives cannot read.
This format has a larger capacity than gen 1: an uncompressed gen 2 volume can hold about 500 GB of data. Data rate: 260 MB/s Tape speed: 6.21 m/sec Load/ready time: 13 sec Rewind time same as Gen 1: 9 seconds The same ratios for compression, short tape capacity, and so on, that apply to gen 1 also apply to gen 2. Because of the differences, special accommodations must be made when mixing drives in a library (e.g., 3494), which is to say that a separate logical library definition is needed for these gen 2 drives and their utilized tapes, in the same manner as a physical library with 3590 and 3592 gen 1 drives. This is also to say that 3592 gen 1 and 2 must have separate scratch tape pools. Supported as of TSM 5.3.3. See the server Readme document. This drive introduces a new capability to IBM tape devices: encryption. See IBM document 7008595 for guidance. TSM devclass Format: 3592-2 Next generation: 3592-E06 Redbook: "IBM System Storage TS1120 Tape Encryption Planning, Implementation, and Usage Guide" (SG24-7320) 3592-E05 and encryption When using encryption, a new format will be used to write encrypted data to tapes. Encryption choices: Application: Encryption keys are managed by the application (TSM), where TSM stores the keys in its database. This method is only for storage pools - cannot be used with DB backups or Exports. Set DRIVEENCRYPTION to ON. Library: Encryption keys are managed by the library. Keys are stored in an encryption key manager and provided to the drive transparent to TSM. If the hardware is set up to use Library Encryption, Tivoli Storage Manager can allow this method to be utilized by setting the DRIVEENCRYPTION parameter to ALLOW. System: Encryption keys are managed by the device driver or operating system and stored in an encryption key manager. Set DRIVEENCRYPTION to ALLOW. 
Scratch tapes: If volumes are written to using the new format and then returned to scratch, they will contain labels that are only readable by encryption enabled drives. To use these scratch volumes in a drive that is not enabled for encryption, either because the hardware is not capable of encryption or because the encryption method is set to NONE, you must relabel them. Refer to the Admin Guide manual and IBM document 7008595. 3592-E06 3592 generation 3, aka TS1130. Native capacity: 1 TB (using JB/JX media), 640 GB (using JA/JW media) or 128 GB (using JJ/JR media) Won't write to a tape which has been partially written in E05 format. TSM devclass Format: 3592-3 (and if you also have 3592-E05 drives, define a separate TSM library for each) 3592 barcode volume name length May be 6 or 8 characters. See IBM Technote 1217789. 3592 media types IBM Redbooks Technote TIPS0419 3592 microcode IBM source: ftp://index.storsys.ibm.com/3592/index.html There are .fixlist and .fmr files in the directory. Alternately, your CE will have a CD-ROM. The CD-ROM microcode can be transferred to the drives from the 3494 Library Manager industrial computer (slow, over RS-422 serial connection), or from the host (fast, over SCSI/FC). Note that the drive does not have to be offline to TSM or the host for the microcode to be transferred to it: the drive has staging capability, where it can receive and hold the microcode update, pending commitment. See also: 3494 tape drive microcode 3592 path To see what pathing a Fibre Channel drive is using, make use of: tapeutil -f /dev/rmt_ path where the next to last line of output reports the "current" path. 3593 An unfortunately numbered IBM product which makes it seem like the next generation from the 3592 tape drive - but it's not: it's a kind of adapter (frame and library manager) which allows a System z to connect to a TS3500 tape library.
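Isolating the "current" line from the 'tapeutil ... path' output described in the 3592 path entry above is a one-liner. A sketch using invented sample output (the sample lines here are illustrative only; the real output format may differ by driver level, so adjust the line selection accordingly):

```shell
# Invented sample of 'tapeutil -f /dev/rmt0 path' output, for illustration.
# On a live system you would capture real output instead:
#   out=$(tapeutil -f /dev/rmt0 path)
out="Path 0: fscsi0 (primary)
Path 1: fscsi1 (alternate)
Current path: Path 0
Exit status: 0"
current=$(printf '%s\n' "$out" | tail -2 | head -1)   # next-to-last line
echo "$current"
```

Per the text, it is the next-to-last line of output that reports the current path, hence the tail/head pairing.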
3599 An IBM "machine type / model" spec for ordering any Magstar cartridges: 3599-001, -002, -003 are 3590 J cartridges; 3599-004, -005, -006 are 3590 K cartridges; 3599-007 is 3590 cleaning cartridge; 3599-011, -012, -013 are 3592 cartridges; 3599-017 is 3592 cleaning cartridge. 3599 A product from Bow Industries for cleaning and retensioning 3590 tape cartridges. www.bowindustries.com/3599.htm 3600 IBM LTO tape library, announced 2001/03/22, withdrawn 2002/10/29. Models: 3600-109 1.8 TB autoloader 3600-220 2/4 TB tower; 1 or 2 drives 3600-R20 2/4 TB rack; 1 or 2 drives The 220 and R20 come with two removable magazines that can each hold up to 10 LTO data or cleaning cartridges. 3607 A mini DLT library from IBM. The 1x16 version contains a single SDLT drive and two, 8-cartridge magazines from which a picker can access tapes to load them into the drive. Control is over a SCSI connection (is a "SCSI library"). There is also ethernet for administrator access to a built-in Web interface. 3995 IBM optical media library, utilizing double-sided, CD-sized optical platters contained in protective plastic cartridges. The media can be rewritable (Magneto-Optical), CCW (Continuous Composite Write-once), or permanent WORM (Write-Once, Read-Many). Each side of a cartridge is an Optical Volume. The optical drive has a fixed, single head: the autochanger can flip the cartridge to make the other side (volume) face the head. See also: WORM 3995 C60 Make sure Device Type ends up as WORM, not OPTICAL. 3995 drives Define as /dev/rop_ (not /dev/op_). See APAR IX79416, which describes element numbers vs. SCSI IDs.
3995 manuals http://www.storage.ibm.com/hardsoft/opticalstor/pubs/pubs3995.html 3995 web page http://www.storage.ibm.com/hardsoft/opticalstor/3995/maine.html http://www.s390.ibm.com/os390/bkserv/hw/50_srch.html 4560SLX IBM $6500 Modular Tape Library Base: a tiny library which can accommodate one or two LTO or SDLT tape drives and can support up to 26 SDLT tape cartridges or up to 30 LTO tape cartridges. This modular, high-density automated tape enclosure is available in rack version only. Each 5U unit contains a power supply and electronics logic. Two rows of tape storage cells occupy the left and right sides of the cabinet, with a picker mechanism running down the center aisle, feeding two drives at the far end of the aisle. 52A Software Version identifier, seen in APARs, referring to TSM 5.2 for AIX. 52M Software Version identifier, seen in APARs, referring to TSM 5.2 for the Macintosh client. This identifier can be used to present all APARs for the Mac 5.2 client, by going to the TSM Support Page and searching on "52M". 52S Software Version identifier, seen in APARs, referring to TSM 5.2 for the server. 52W Software Version identifier, seen in APARs, referring to TSM 5.2 for the Windows client. 53D Software Version identifier, seen in APARs, referring to TSM 5.3 for the Windows 2003 client. 53H Software Version identifier, seen in APARs, referring to TSM 5.3 for the HP client. 53L Software Version identifier, seen in APARs, referring to TSM 5.3 for the Linux x86 client. 53N Software Version identifier, seen in APARs, referring to TSM 5.3 for the NetWare client. 53O Software Version identifier, seen in APARs, referring to TSM 5.3 for the Windows x64 client. 53P Software Version identifier, seen in APARs, referring to TSM 5.3 for the Linux POWER (architecture) client. 53S Software Version identifier, seen in APARs, referring to TSM 5.3 for the Solaris client. 53W Software Version identifier, seen in APARs, referring to TSM 5.3 for the Windows client.
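The APAR version identifiers above (two release digits plus a platform letter) cannot be decoded by letter alone, since the same letter can mean different things per release (52S is the server, while 53S is the Solaris client), so a whole-identifier lookup is the safe approach. A sketch covering a subset of the identifiers listed here:

```shell
# Look up a few of the APAR version identifiers from the entries above.
# Only identifiers documented in the text are included; anything else
# falls through to "unknown".
decode() {
  case "$1" in
    52A) echo "TSM 5.2, AIX" ;;
    52M) echo "TSM 5.2, Macintosh client" ;;
    52S) echo "TSM 5.2, server" ;;
    52W) echo "TSM 5.2, Windows client" ;;
    53H) echo "TSM 5.3, HP client" ;;
    53S) echo "TSM 5.3, Solaris client" ;;
    53W) echo "TSM 5.3, Windows client" ;;
    *)   echo "unknown identifier: $1" ;;
  esac
}
decode 52S
decode 53S
```

The two sample calls make the ambiguity concrete: the trailing S resolves differently depending on the release digits.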
53X Software Version identifier, seen in APARs, referring to TSM 5.3 for the Linux zSeries client. 53Z Software Version identifier, seen in APARs, referring to TSM 5.3 for the zOS client. 56Kb modem uploads With 56Kb modem technology, 53Kb is the fastest download speed you can usually expect, and 33Kb is the highest upload speed possible. And remember that phone line quality can reduce that further. Ref: www.56k.com 64-bit executable in AIX? To discern whether an AIX command or object module is 64-bit, rather than 32-bit, use the 'file' command on it. (This command references "signature" indicators listed in /etc/magic.) If 64-bit, the command will report like: 64-bit XCOFF executable or object module not stripped See also: 32-bit executable in AIX? 64-bit filesize support Was added in PTF 6 of the version 2 client. 64-bit ready? (Is ADSM?) Per Dave Cannon, ADSM Development, 1998/04/17, the ADSM server has always used 64-bit values for handling sizes and capacities. 726 Tape Unit IBM's first tape drive model, announced in 1952. A 10.5-inch-diameter reel of tape could hold the equivalent of more than 35,000 punched cards. This afforded data storage capability and speed hitherto only dreamed of. Density: 100 dpi, on 1/2" tape, 7500 characters per second. 7206 IBM model number for 4mm tape drive. Media capacity: 4 GB Transfer rate: 400 KB/S 7207 IBM model number for QIC tape drive. Media capacity: 1.2 GB Transfer rate: 300 KB/S 7208 IBM model number for 8mm tape drive. Media capacity: 5 GB Transfer rate: 500 KB/S 7331 IBM model number for a tape library containing 8mm tapes. It comes with a driver (Atape on AIX, IBMTape on Solaris) for the robot to go with the generic OST driver for the drive. That's to support non-ADSM applications, but ADSM has its own driver for these devices. Media capacity: 7 GB Transfer rate: 500 KB/S 7332 IBM model number for 4mm tape drive. Media capacity: 4 GB Transfer rate: 400 KB/S 7337 A DLT library. 
Define in ADSM like: DEFine LIBRary autoDLTlib LIBType=SCSI DEVice=/dev/lb0 DEFine DRive autodltlib drive01 DEVice=/dev/mt0 ELEMent=116 DEFine DRive autodltlib drive02 DEVice=/dev/mt1 ELEMent=117 DEFine DEVclass autodlt_class DEVType=dlt LIBRary=autodltlib DEFine STGpool autodlt_pool autodlt_class MAXSCRatch=15
8200 Refers to recording format for 8mm tapes, for a capacity of about 2.3 GB.
8200C Refers to recording format for 8mm tapes, for a capacity of about 3.5 GB.
8500 Refers to recording format for 8mm tapes, for a capacity of about 5.0 GB.
8500C Refers to recording format for 8mm tapes, for a capacity of about 7.0 GB.
8900 Refers to recording format for 8mm tapes, for a capacity of about 20.0 GB.
8mm drives All are made by Exabyte.
8mm tape technology Yecch! Horribly unreliable. Tends to be "write only" - write okay, but tapes unreadable thereafter.
9710/9714 See: StorageTek
9840 See: STK 9840
9940b drive Devclass: - If employing the Gresham Advantape driver: generictape - If employing the Tivoli driver: ecartridge
Abandoned filespaces See: File Spaces, abandoned
ABC Archive Backup Client for *SM, as on OpenVMS. The software is written by SSSI. It uses the TSM API to save and restore files. See also: OpenVMS
ABSolute A Copy Group mode value (MODE=ABSolute) that indicates that an object is considered for backup even if it has not changed since the last time it was backed up; that is, force all files to be backed up. See also: MODE; SERialization (another Copy Group parameter) Contrast with: MODified.
AC Administration Center.
Accelis (LTO) (sometimes misspelled "Accellis") Designer name for the next generation of IBM's mid-range tape technology, circa 1999, following the 3570: LTO. Cartridge is same as 3570, including dual-hub, half-wound for rapid initial access to data residing at either end of the tape (intended to be 10 seconds or less), with 8mm tape. Physically sturdier than Ultrium, Accelis was intended for large-scale automated libraries.
Its head would have 8 write and read and verify elements, and a pair of servo heads. The servo functions are the same as in Ultrium, but instead of six servo tracks there are nine servo tracks (0-8). Accelis tapes would have only two data bands, compared with the four bands on Ultrium. The two bands are separated into four quadrants: the first data band is separated into quadrants 0 (to the left of the head) and 1 (to the right), and the second data band is separated into quadrants 2 (to the left) and 3 (to the right). As a write is being performed, the head writes to quadrant 0. When that is filled, the head moves to the next quadrant (1); and so on until all the quadrants are filled. This method makes for less area to search for data, resulting in faster access times. Accelis would perform verification just as Ultrium does, with direction buffers to prevent cross-track magnetic interference. But Accelis never made it to reality: increasing disk capacity made the higher-capacity Ultrium more realistic; and two-hub tape cartridges are wasteful in containing "50% air" instead of tape. Accelis would have had: a Cartridge Memory (LTO CM, LTO-CM) chip embedded in the cartridge: a non-contacting RF module, with non-volatile memory capacity of 4096 bytes, providing for storage and retrieval of cartridge, data positioning, and user specified info. Recording method: Multi-channel linear serpentine. Capacity: 25 GB native, uncompressed, on 216 m of tape; 50 GB compressed. Transfer rate: 10-20 MB/second. http://www.Accelis.com/ "What Happened to Accelis?": http://www.enterprisestorageforum.com/technology/features/article.php/1461291 See also: 3583; LTO; MAM; Ultrium (LTO)
ACCept Date TSM server command to cause the server to accept the current date and time as valid when an invalid date and time are detected.
Syntax: 'ACCept Date' Note that one should not normally have to do this, even across Daylight Savings Time changes, as the conventions under which application programs are run on the server system should let the server automatically have the correct date and time. In Unix systems, for example, the TZ (Time Zone) environment variable specifies the time zone offsets for Daylight and Standard times. In AIX you can do 'ps eww ' to inspect the env vars of the running server. In a z/OS environment, see IBM Technote 21153685. See also: Clock; Daylight Savings Time Access Line-item title from the 'Query Volume Format=Detailed' report, which says how the volume may be accessed: Read-Only, Read/Write, Unavailable, Destroyed, OFfsite. Use 'UPDate Volume' to change the access value. If Access is Read-Only for a storage pool within a hierarchy of storage pools, ADSM will skip that level and attempt to write the data to the next level. Access TSM db: Column in Volumes table. Possible values: DESTROYED, OFFSITE, READONLY, READWRITE, UNAVAILABLE Access Control Lists (AIX) Extended permissions which are preserved in Backup/Restore. "Access denied" A message which may be seen in some environments; usually means that some other program has the file open in a manner that prevents other applications from opening it (including TSM). Access mode A storage pool and storage volume attribute recorded in the TSM database specifying whether data can be written to or read from storage pools or storage volumes. It can be one of: Read/write Can read or write volume in the storage pool. Set with UPDate STGpool or UPDate Volume. Read-only Volume can only be read. Set with UPDate STGpool or UPDate Volume. Unavailable Volume is not available for any kind of access. Set with UPDate STGpool or UPDate Volume. DEStroyed Possible for a primary storage pool (only), says that the volume has been permanently damaged. Do RESTORE STGpool or RESTORE Volume. Set with UPDate Volume. 
OFfsite Possible for a copy storage pool, says that volume is away and can't be mounted. Set with UPDate Volume. Ref: Admin Guide See also: DEStroyed
Access time When a file was last read: its "atime" value (stat struct st_atime). The Backup operation results in the file's access timestamp being changed as each file is backed up, because as a generalized application it is performing conventional I/O to read the contents of the file, and the operating system records this access. (That is, it is not Backup itself which modifies the timestamp: it's merely that its actions incidentally cause it to change.) Beginning with the Version 2 Release 1 Level 0.1 PTF, UNIX backup and archive processes changed the ctime instead of user access time (atime). This was done because the HSM feature on AIX uses atime in assessing a file's eligibility and priority for migration. However, since the change of ctime conflicts with other existing software, with this Level 0.2 PTF, UNIX backup and archive functions now perform as they did with Version 1: atime is updated, but not ctime. AIX customers might consider getting around that by the rather painful step of using the 'cplv' command to make a copy of the file system logical volumes, then 'fsck' and 'mount' the copy and run backup; but that isn't very reliable. One thinks of maybe getting around the problem by remounting a mounted file system read-only; but in AIX that doesn't work, as lower level mechanisms know that the singular file has been touched. (See topic "MOUNTING FILE SYSTEMS READ-ONLY FOR BACKUP" near the bottom of this documentation.) Network Appliance devices can make an instant snapshot image of a file system for convenient backup, a la AFS design. Veritas Netbackup can restore the atime but at the expense of the ctime (http://seer.support.veritas.com/docs/240723.htm) See also: ctime; FlashCopy; mtime
Accessor On a tape robot (e.g., 3494) is the part which moves within the library and carries the arm/hand assembly.
See also: Gripper
Accounting Records client session activities, with an accounting record written at the end of each client node session (in which a server interaction is required). The information recorded chiefly reflects volumetrics, and thus would be more useful for cross-charging purposes than for more illuminating uses. Note that a client session which does not require interaction with the server, such as 'q option', does not result in an accounting record being written. A busy system will create VOLUMINOUS accounting files, so use judiciously; but despite the volume, there is no perceptible performance impact on the server from activating accounting. Customers report that NAS backup statistics are not recorded in the accounting log. See also: dsmaccnt.log; SUMMARY
Accounting, query 'Query STatus', seek "Accounting:". Unfortunately, its output is meager, revealing only On or Off. See also: dsmaccnt.log
Accounting, turn off 'Set ACCounting OFf'
Accounting, turn on 'Set ACCounting ON' See also: dsmaccnt.log
Accounting log Unix: Is file dsmaccnt.log, located in the server directory where no overriding environment variables are in effect, or the directory specified by the DSMSERV_DIR environment variable, or the directory specified on the DSMSERV_ACCOUNTING_DIR environment variable. Accounting data appears solely in this file: no TSM database space is used. The accounting log is more comprehensive than either the Summary Table or ANE records in the Activity Log because the accounting log is written by the server for all client activity, whereas clients which employ the TSM API (including the TDPs) cannot, because the API lacks statistics transfer capability. MVS (OS/390): the recording occurs in SMF records, subtype 14. Accounting recording begins when 'Set ACCounting ON' is done and client activity occurs.
The server keeps the file open, and the file will grow endlessly: there is no expiration pruning done by TSM; so you should cut the file off periodically, either when the server starts/ends, or by turning accounting off for the duration of the cut-off. There is no documented support for multiple, co-resident TSM servers to share the same accounting log; thus, you would risk collision and data loss in attempting to do so. Mingling would be problematic in that there is no record field identifying which server wrote a log entry. See also: dsmaccnt.log
Accounting log directory Specified via environment variable DSMSERV_ACCOUNTING_DIR (q.v.) in Unix environments, or Windows Registry key. If that's not specified, then the directory will be that specified by the DSMSERV_DIR environment variable; and if that is not specified, then it will be the directory wherein the TSM server was started. Introduced late in *SMv3.
Accounting record layout/fields See the Admin Guide for a description of record contents. Field 24, "Amount of media wait time during the session", refers to time waiting for tape mounts. Note that maintenance levels may add accounting fields. See layout description in "ACCOUNTING RECORD FORMAT" near the bottom of this functional directory.
Accounting records processing There are no formal tools for doing this. The IBM FTP site's adsm/nosuppt directory contains an adsmacct.exec REXX script, but that's it. See http://people.bu.edu/rbs/TSM_Aids.html for a Perl program to do this.
ACF 3590 tape drive: Automatic Cartridge Facility: a magazine which can hold 10 cartridges. Note that this does not exist as such on the 3494: it has a 10-cartridge Convenience I/O Station, which is little more than a pass-through area.
ACL handling (Access Control Lists) ACL info will be stored in the *SM database by Archive and Backup, unless it is too big, in which case the ACL info will be stored in a storage pool, which can be controlled by DIRMc.
Ref: Using the Unix Backup-Archive Clients (indexed under Access Permissions, describing ACLs as "extended permissions"). See also: Archive; Backup; DIRMc; INCRBYDate; SKIPACL; SKIPACLUPdatecheck
ACLs (Access Control Lists), changes affecting backup Changes to Unix ACLs do not change the file's mtime, so such a change will not cause the file to be backed up by date.
ACLS Typically a misspelling of "ACSLS", but could be Auto Cartridge Loader System.
ACS Automated Cartridge System
ACSACCESSID Server option to specify the id for the ACS access control. Syntax: ACSACCESSID name Code a name 1-64 characters long. The default id is hostname.
ACSDRVID Device Driver ID for ACSLS.
ACSLOCKDRIVE Server option to specify whether the drives within the ACSLS libraries are to be locked. Drive locking ensures exclusive use of the drive within the ACSLS library in a shared environment. However, there is some performance improvement if locking is not performed. If the ADSM drives are not shared with other applications in the configuration, then drive locking is not required. Syntax: ACSLOCKDRIVE [YES | NO] Default: NO
ACSLS Refers to the STK Automated Cartridge System Library Software. Based upon an RPC client (SSI) - server (CSI) model, it manages the physical aspects of tape cartridge storage and retrieval, while data retrieval is separate, over SCSI or other method. Whenever TSM has a command to send to the robot arm, it changes the command into something that works rather like an RPC call that goes over to the ACSLS software, then ACSLS issues the SCSI commands to the robot arm. ACSLS is typically needed only when sharing a library, wherein ACSLS arbitrates requests; otherwise TSM may control the library directly. Performance: As of 2000/06, severely impaired by being single-threaded, resulting in long tape mount times as *SM queries the drive several times before being sure that a mount is safe.
http://www.stortek.com/StorageTek/software/acsls/ Issues: Adds a layer of software between TSM and the library, and an opportunity for the two to get out of sync - making for more complex problems. Can also result in timing problems. Debugging: Use 'rpcinfo -p' on the server to look for the following ACSLS programs being registered in Portmap: program vers proto port 536871166 2 tcp 4354 300031 2 tcp 4355 then use 'rpcinfo -t ...' to reflect off the program instances. Server options: ACSACCESSID; ACSLOCKDRIVE; ACSQUICKINIT; ACSTIMEOUTX IBM Technotes: 1144928
ACSQUICKINIT Server option to specify whether initialization of the ACSLS library at server startup should be quick or full. A full initialization matches the ACSLS inventory with the ADSM inventory and validates the locking for each ADSM-owned volume. It also validates the drive locking and dismounts all volumes currently in the ADSM drives. A full initialization takes about 1-2 seconds per volume and can take a long time during server startup if the library inventory is large. ACSQUICKINIT bypasses all the inventory matching, lock validation, and volume dismounting from the drives. The user must ensure the integrity of the ADSM inventory and drive availability: all ADSM volumes and drives are assumed to be locked by the same lock_id and available. This option is useful for server restart, and should only be used if all ADSM inventory and resources remain the same while the server is down. Syntax: ACSQUICKINIT [YES | NO] Default: NO
ACSTIMEOUTX Server option to specify the multiple for the built-in timeout value for the ACSLS API. The built-in timeout value for the ACS audit API is 1800 seconds; for all other APIs it is 600 seconds. If the multiple value specified is 5, the timeout value for the audit API becomes 9000 seconds and that for all other APIs becomes 3000 seconds. Syntax: ACSTIMEOUTX value Code a number from 1 - 100.
Default: 1 Activate Policy Set See: ACTivate POlicyset; Policy set, activate ACTivate POlicyset *SM server command to specify an existing policy set as the Active policy set for a policy domain. Syntax: 'ACTivate POlicyset ' (Be sure to do 'VALidate POlicyset' beforehand.) You need to do an Activate after making management class changes. ACTIVE Column name in the ADMIN_SCHEDULES SQL database table. Possible values: YES, NO. SELECT * FROM ADMIN_SCHEDULES Active data storage pool TSM 5.4 introduced the Active-Data Pool (ADP) facility. Its purpose is to speed restorals of current data, eliminating the time waste of plowing through all the Inactive versions of files in order to get to the Active versions - a particular concern when the media is tape. The device class must be a sequential access type, where FILE is the most logical choice, but tape could be used. Collocation can be employed, as with other sequential pools. An Active-Data Pool is a *copy* of the Active data in a primary storage pool, not a separated area where Active versions of data would live separate from Inactive versions. Data gets into the ADP either during client backup (the most efficient means), per the ACTIVEDATAPool spec on the primary pool; or after the fact via the COPY ACTIVEdata command, to run through the morass of the primary storage pool to copy the Active versions of files from the mix of Active and Inactive versions. Note that the BAckup STGpool command is not supported for active-data pools. Active Directory See: Windows Active Directory Active file system A file system for which space management is activated. HSM can perform all space management tasks for an active file system, including automatic migration, recall, and reconciliation and selective migration and recall. Contrast with inactive file system. 
Active files, identify in Select Where allowed: STATE='ACTIVE_VERSION' See also: Inactive files, identify in Select; STATE Active files, number and bytes Do 'EXPort Node NodeName \ FILESpace=FileSpaceName \ FILEData=BACKUPActive \ Preview=Yes' Message ANR0986I will report the number of files and bytes. But, this is ploddingly slow. An alternate method, reporting MB only, follows the definition of Active files, meaning files remaining in the file system - as reflected in a Unix 'df' command and: SELECT SUM(CAPACITY*PCT_UTIL/100) FROM FILESPACES WHERE NODE_NAME='____' (Omit the Where to see the sum for all nodes.) This Select is very fast and obviously depends upon whole file system backups. (Selective backups and limited backups can throw it off.) Or: In Unix, you could instead approximate the number of Active files via the Unix command 'df -i' to get the number of in-use inodes, where most of the number would be files, and a minority being directories, which you could approximate. See also: Inactive files, number and bytes; Estimate Active files, report in terms of MB By definition, Active files are those which are currently present in the client file system, which a current backup causes to be reflected in filespace numbers, so the following yields reasonable results: SELECT NODE_NAME, FILESPACE_NAME, FILESPACE_TYPE, CAPACITY AS "File System Size in MB", PCT_UTIL, DECIMAL((CAPACITY * (PCT_UTIL / 100.0)), 10, 2) AS "MB of Active Files" FROM FILESPACES ORDER BY NODE_NAME, FILESPACE_NAME Caveats: The amount of data in a TSM server filespace will differ somewhat from the client file system where some files are excluded from backups, and more so where client compression is employed. But in most cases the numbers will be good. Active files, separate storage pool Possible via the Active-data Pool (ADP) facility in TSM 5.4. 
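The "report in terms of MB" Select above derives active-file volume as CAPACITY * PCT_UTIL / 100. Here is the same arithmetic outside SQL, for a quick sanity check of the numbers (a sketch only; as noted above, filespace figures can differ where exclusions or client compression apply):

```python
# The estimate used by the SELECT above: MB of Active files in a filespace
# is approximated as file system capacity times its percent utilization.
def active_mb(capacity_mb: float, pct_util: float) -> float:
    """Approximate MB of Active files, per CAPACITY * PCT_UTIL / 100."""
    return round(capacity_mb * pct_util / 100.0, 2)
```

For example, active_mb(10240, 37.5) yields 3840.0.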
Active files for a user, identify via Select SELECT COUNT(*) AS "Active files count" - FROM BACKUPS WHERE - NODE_NAME='UPPER_CASE_NAME' AND - FILESPACE_NAME='___' AND OWNER='___' - AND STATE='ACTIVE_VERSION'
Active log (TSM 6) If it is full, a new log file is used. When a log file is no longer active (all SQL statements are committed), it is archived.
Active policy set The policy set within a policy domain most recently subjected to an 'activate' to effectively establish its specifications as those to be in effect. This policy set is used by all client nodes assigned to the current policy domain. See policy set.
Active Version (Active File) The most recent backup copy of an object stored in ADSM storage for an object that currently exists on a file server or workstation. An active version remains active and exempt from deletion until it is replaced by a new backup version, or ADSM detects during a backup that the user has deleted the original object from a file server or workstation. Note that active and inactive files may exist on the same volumes. See also: ACTIVE_VERSION; Inactive Version; INACTIVE_VERSION
Active versions, keep in stgpool For faster restoral, you may want to retain Active files in a higher storage pool of your storage pool hierarchy. There has been no operand in the product to allow you to specify this explicitly - because it is architecturally prohibitive to achieve, given that the unit of data movement within storage pools is an Aggregate, not a client file. But you can roughly achieve that end via the Stgpool MIGDelay value, to keep recent (Active) files in the higher storage pool. Of course, if there is little turnover in the file system feeding the storage pool, Active files will get old and will migrate. Instead of all that, look into using the new feature Active-data Pool.
Active-data Pool New in TSM 5.4. Allows Active files to be kept in their own storage pool.
While this obviously facilitates the restoral of Active files, it has ramifications for other types of restorals... The Admin Guide notes: "The server will not attempt to retrieve client files from an active-data pool during a point-in-time restore." ACTIVE_VERSION SQL DB: State value in Backups table for a current, Active file. See also: DEACTIVATE_DATE Activity log Contains all messages normally sent to the server console during server operation. This is information stored in the TSM server database, not in a separate file. (The internalization of the logging is a Bad Idea in leaving you nothing to refer to when the TSM server is not responding and you need to find out what's wrong; and there is no OS level utility provided with TSM to see the log without the TSM server being functional.) Do 'Query ACtlog' to get info. Each time the server starts it begins logging with message: ANR2100I Activity log process has started. See also: Activity log pruning Activity log, create an entry As of TSM 3.7.3 you can, from the client side, cause messages to be added to the server Activity Log (ANE4771I) by using the API's dsmLogEvent. Another means, crude but effective: use an unrecognized command name, like: "COMMENT At this time we will be powering off our tape robot." It will show up on an ANR2017I message, followed by "ANR2000E Unknown command - COMMENT.", which can be ignored. See also: ISSUE MESSAGE Activity log, number of entries There is no server command to readily determine the amount of database space consumed by the Activity Log. The only close way is to count the number of log entries, as via batch command: 'dsmadmc -id=___ -pa=___ q act BEGINDate=-9999 | grep ANR | wc -l' or do: SELECT COUNT(*) FROM ACTLOG See also: Activity log pruning Activity log, search 'Query ACtlog ... 
Search='Search string'
Activity log, search for bkup/restores Use the extended form of 'Query ACtlog' to perform a qualified search, and thus get back results limited by subject nodename and message type. We know that client backup/restore operations log summary statistics via ANE* messages so we can search like: Query ACtlog BEGINTime=-3 \ ORiginator=CLient NODEname=Somenode \ Search='ANE'
Activity log, Select entries less than an hour old SELECT * FROM ACTLOG WHERE - (CAST((CURRENT_TIMESTAMP - DATE_TIME) \ HOURS AS INTEGER) < 1)
Activity log, Select entries more than an hour old SELECT * FROM ACTLOG WHERE - (CAST((CURRENT_TIMESTAMP - DATE_TIME) \ HOURS AS INTEGER) > 1)
Activity log, seek a message number 'Query ACtlog ... MSGno=____' or SELECT MESSAGE FROM ACTLOG WHERE - MSGNO=0988 Seek one more than an hour old: SELECT MESSAGE FROM ACTLOG WHERE - MSGNO=0986 AND - DATE_TIME<(CURRENT_TIMESTAMP-(1 HOUR))
Activity log, seek message text SELECT * FROM ACTLOG WHERE MESSAGE LIKE '%%'
Activity log, seek severity messages in last 2 days SELECT * From ACTLOG Where \ (SEVERITY='W' Or SEVERITY='E' Or \ SEVERITY='S' Or SEVERITY='D') And \ (DAYS(CURRENT_TIMESTAMP)- \ DAYS(DATE_TIME)) <2 Or, more efficiently: SELECT * From ACTLOG Where \ SEVERITY In ('W','E','S','D') And \ (DAYS(CURRENT_TIMESTAMP)- \ DAYS(DATE_TIME)) <2 See also: ACTLOG
Activity Log, session types In client sessions, you will find: ANE4953I for Archive sessions ANE4954I for Backup sessions
Activity log content, query 'Query ACtlog'
Activity log pruning (prune) Occurs just after midnite, driven by 'Set ACTlogretention N_Days' value. The first messages which always remain in the Activity Log, related to the pruning, are ANR2102I (pruning started) and ANR2103I (pruning completed). The pruning typically takes less than a minute.
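The hour-age Selects above can also be applied client-side, e.g., to ACTLOG rows captured via dsmadmc. A sketch in Python, assuming only the DATE_TIME format shown in the ACTLOG column description (YYYY-MM-DD hh:mm:ss with fractional seconds):

```python
# Age test equivalent to the Selects above, for ACTLOG-style timestamps.
from datetime import datetime, timedelta

def is_older_than(date_time: str, hours: int, now: datetime) -> bool:
    """True if an ACTLOG DATE_TIME string is more than 'hours' hours old."""
    ts = datetime.strptime(date_time, "%Y-%m-%d %H:%M:%S.%f")
    return (now - ts) > timedelta(hours=hours)
```

'now' is a parameter rather than a call to datetime.now() so the comparison point is explicit; pass datetime.now() in real use.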
Activity log retention period, query 'Query STatus', look for "Activity Log Retention Period"
Activity log retention period, set 'Set ACTlogretention N_Days'
Activity log size, control See: Set ACTlogretention
Activity Summary Table See: SUMMARY table
ACTLOG The *SM database Activity Log table. Columns: DATE_TIME YYYY-MM-DD hh:mm:ss.000000 (Is the index field). MSGNO The 1-4 digits of the ANR/ANE message number. (Not left-padded with 0; so msg ANR0403I has MSGNO 403.) SEVERITY Equals the last letter of the message number: I Information W Warning E Error S Severe error D Diagnostic (ANR9999D) MESSAGE The full message, including ANR....X or ANE....X message number. ORIGINATOR SERVER for ANR msgs; CLIENT for ANE msgs. NODENAME OWNERNAME SCHEDNAME DOMAINNAME SESSID SERVERNAME SESSION PROCESS If a server process, its number; else null. Note that doing a date-time search can be onerous. You sometimes have to be wily to compensate. Here we efficiently retrieve records less than 3 mins old: select * from actlog where \ DATE(DATE_TIME)=CURRENT_DATE and \ date_time >(current_timestamp - 3 minutes)
ACTlogretention See: Set ACTlogretention
AD See: Windows Active Directory
Adaptec SCSI controller, performance IBM Technote 1167281 notes that some Adaptec controllers may grossly limit the I/O blocksize value, which has a severe impact on performance.
Adaptive Differencing A.k.a. "adaptive sub-file backup" and "mobile backup", to back up only the changed portions of a file rather than the whole file. Can be used for files of size greater than 1 KB and no greater than 2 GB. (The low-end limit (1024 bytes) was due to some strange behavior with really small files, e.g., if a file started out at 5 k and then was truncated to 8 bytes. The solution was to just send the entire file if the file fell below the 1 KB threshold - no problem since these are tiny files.)
Initially introduced for TSM4 Windows clients, intended for roaming users needing to back up data on laptop computers, over a telephone line. Note that the transfer speed thus varies greatly according to the phone line. See "56Kb modem uploads" for insight. (All 4.1+ servers can store the subfile data sent by the Windows client - providing that it is turned on in the server, via 'Set SUBFILE'.) Limitations: the differencing subsystem in use is limited to 32 bits, meaning 2 GB files. The developers chose 2 GB (instead of 4 GB) as the limit to avoid any possible boundary problems near the 32-bit addressing limit and also because this technology was aimed at the mobile market (read: Who is going to have files on their laptops > 2 GB?). As of 2003 there are no plans to go to 64 bits. Ref: TSM 3.7.3 and 4.1 Technical Guide redbook; Windows client manual; Whitepaper on TSM Adaptive Sub-file Differencing at http://www.ibm.com/software/tivoli/library/whitepapers/ See also: Delta file; Set SUBFILE; SUBFILE*
ADIC Vendor: Advanced Digital Information Corporation - a leading device-independent storage solutions provider to the open systems marketplace. A reseller. www.adic.com
AdmCmdResp The "last verb" as seen in dsmadmc sessions running in -consolemode.
admdvol TSM's administrative thread for deleting volumes, as via DELete Volume.
ADMIN Name of the default administrator ID, from the TSM installation.
Admin GUI The product has never had one. There is a command line admin client (dsmadmc), and a web admin client instead. And, more recently available is Administration Center.
Administration Center (Admin Center) The TSM Administration Center, a Java-based replacement for the Web Admin interface, new in TSM 5.3. ISC is its base and Administration Center is only a "plug in". Beware that ISC is massive Java. Administration Center pages are made up of portlets, such as properties notebooks or object tables. Install per the TSM Installation Guide for your server platform.
Usage is outlined in the TSM Administration Guide manual, with details provided within the Help area of the facility. IBM site search: TSMADMINCENTER Requirements: Technote 1195062, 1410467 FAQ: IBM Technote 1193419 Other Technotes: 1193326; 1193101; 1193443 Download ISC from ftp://ftp.software.ibm.com/storage/ tivoli-storage-management/maintenance/ admincenter/ To give a new Admin Center userid the same level of authority as iscadmin, add the userid to the iscadmin group. See also: ISC; TIP Administration GUI See: Admin GUI Administrative client A program that runs on a file server, workstation, or mainframe. This program allows an ADSM administrator to control and monitor an ADSM server using ADSM administrative commands. Contrast with backup-archive client. Administrative command line interface Beginning with the 3.7 client, the Administrative command line interface is no longer part of the Typical install, in order to bring it in line with the needs of the "typical" TSM user, who is an end user who does not require this capability. If you run a Custom install, you can select the Admin component to be installed. Administrative processes which failed Try 'Query EVent * Type=Administrative EXceptionsonly=Yes'. Administrative schedule A schedule to control operations affecting the TSM server. Note that you can't redirect output from an administrative schedule. That is, if you define an administrative schedule, you cannot code ">" or ">>" in the CMD. This seems to be related to the restriction that you can't redirect output from an Admin command issued from the ADSM console. Experience shows that an admin schedule will not be kicked off if a Server Script is running (at least in ADSMv3). The only restricted commands are MACRO and Query ACtlog, because... MACRO: Macros are valid only from administrative clients. Scheduling of admin commands is contained solely within the server and the server has no knowledge of macros. 
Query ACtlog: Since all output from scheduled admin commands is forced to the actlog, scheduling a Query ACtlog would force the resulting output right back to the actlog, thereby doubling the size of the actlog. See: DEFine SCHedule, administrative Administrative schedule, run one time Define the administrative schedule with PERUnits=Onetime. Administrative schedules, disable See: DISABLESCheds Administrative schedules, prevent See: DISABLESCheds Administrative user ID Is created automatically with a node name when REGister Node is performed for the node name, unless USerid=NONE is included on that command line. Administrative Web interface See: Web Admin Administrator A user who is registered with an ADSM server as an administrator. Administrators are assigned one or more privilege classes that determine which administrative tasks they can perform. Administrators can use the administrative client to enter ADSM server commands and queries according to their privileges. Be aware that ADSM associates schedules and other definitions with the administrator who created or last changed them, and that removal or locking of the admin can cause the object to stop operating. In light of this affiliation, it is best for a shop to define a general administrator ID (much like root on a Unix system) which should be used to manage resources having sensitivity to the administrator ID. Administrator, add See: Administrator, register Administrator, lock out 'LOCK Admin Admin_Name' See also: Administrators, web, lock out Administrator, password, change 'UPDate Admin Admin_Name PassWord' Administrator, register 'REGister Admin ...' (q.v.) The administrator starts out with Default privilege class. To get more, the 'GRant AUTHority' command must be issued. Administrator, remove 'REMove Admin Adm_Name' But: A schedule which has been updated by an administrator will have that person's name on it (Query SCHedule [Type = Administrative] Format=Detailed). 
If that administrator is removed, said schedules will no longer run. Further, the admin will not be removable as long as there are things in the server with that name on them. Administrator, rename 'REName Admin Old_Adm_Name New_Name' Administrator, revoke authority 'REVoke AUTHority Adm_Name [CLasses=SYstem|Policy|STorage| Operator|Analyst] [DOmains=domain1[,domain2...]] [STGpools=pool1[,pool2...]]' Administrator, unlock 'UNLOCK Admin Adm_Name' Administrator, update info or password 'UPDate Admin ...' (q.v.) Administrator files Located in /usr/lpp/adsm/bin/ Administrator passwords, reset Shamefully, some sites lose track of all their administrator passwords, and need to restore administrator access. The one way is to bring the server down and then start it interactively, which is to say implicitly under the SERVER_CONSOLE administrator id. See: HALT; UPDate Admin Administrator privilege classes From highest level to lowest: System - Total authority Policy - Policy domains, sets, management classes, copy groups, schedules. Storage - Manage storage resources. Operator - Server operation, availability of storage media. Analyst - Reset counters, track server statistics. Default - Can do queries. Right out of a 'REGister Admin' cmd, the individual gets Default privilege. To get more, the 'GRant AUTHority' command must be issued. Administrators, query 'Query ADmin * Format=Detailed' Administrators, web, lock out You can update the server options file COMMMethod option to eliminate the HTTP and HTTPS specifications. See also: "Administrator, lock out" for locking out a single administrator. ADP Active-data Pool, new in TSM 5.4. adsm The command used to invoke the standard ADSM interface (GUI), for access to Utilities, Server, Administrative Client, Backup-Archive Client, and HSM Client management. /usr/bin/adsm -> /usr/lpp/adsmserv/ezadsm/adsm. Contrast with the 'dsmadm' command, which is the GUI for pure server administration. ADSM ADSTAR Distributed Storage Manager. 
Version 1 Release 1 launched July 29, 1993. V2.1 1995 V3.1 1997 Originated in IBM's hardware division, as software allied with its tape drive, medium changer, and then tape library products. It eventually grew to the point where that association was inappropriate and so was moved into the Tivoli software section of IBM, to then become Tivoli Storage Manager. Consisted of Versions 1, 2, and 3 through Release 1. See also: IBM Tivoli Storage Manager; Tivoli Storage Manager; TSM; WDSF ADSM 3.1 Unix client In AIX, ran on AIX 4.1.4, 4.2.1, 4.3, 4.3.1, or 4.3.2. ADSM components installed AIX: 'lslpp -l "adsm*"' See also: TSM components installed ADSM monitoring products ADSM Manager (see http://www.mainstar.com/adsm.htm). Tivoli Decision Support for Storage Management Analysis. This agent program now ships free with TSM V4.1; however you do need a Tivoli Decision Support server. See redbook Tivoli Storage Management Reporting SG24-6109. See also: TSM monitoring products. ADSM origins See: WDSF ADSM server version/release level Revealed in server command Query STatus. Is not available in any SQL table via Select. ADSM usage, restrict by groups Use the "Groups" option in the Client System Options file (dsm.sys) to name the Unix groups which may use ADSM services. See also "Users" option. ADSM.DISKLOG (MVS) Is created as a result of the ANRINST job. You can find a sample of the JCL in the ADSM.SAMPLIB. ADSM.SYS The C:\adsm.sys directory is the "Registry Staging Directory", backed up as part of the system object backup (systemstate and systemservices objects), as the Backup client is traversing the C: DRIVE. The ADSM.SYS directory is to be located on the system drive: there is no option for it being located elsewhere. Specifically, ADSM.SYS is to be located where the Windows System directory (e.g., C:\Windows\system32) is located. Subdirectory adsm.sys\ASR is used for ASR staging. 
adsm.sys should be excluded from "traditional" incremental and selective backups ("exclude c:\adsm.sys\...\*" is implicit - but should really be "exclude.dir c:\adsm.sys", to avoid timing problems.) Note that backups may report adsm.sys\WMI, adsm.sys\IIS and adsm.sys\EVENTLOG as "skipped": these are not files, but subdirectories. You may employ "exclude.dir c:\adsm.sys" in your include-exclude list to eliminate the messages. (A future enhancement may implicitly do exclude.dir.) For Windows 2003, ADSM.SYS includes VSS metadata, which also needs to be backed up. See: BACKUPRegistry; NT Registry, back up; REGREST ADSM_DD_* These are AIX device errors (circa 1997), as appear in the AIX Error Log. ADSM logs certain device errors in the AIX system error log. Accompanying Sense Data details the error condition. ADSM_DD_LOG1 (0XAC3AB953) DEVICE DRIVER SOFTWARE ERROR Logged by the ADSM device driver when a problem is suspected in the ADSM device driver software. For example, if the ADSM device driver issues a SCSI I/O command with an illegal operation code the command fails and the error is logged with this identifier. ADSM_DD_LOG2 (0X5680E405) HARDWARE/COMMAND-ABORTED ERROR Logged by the ADSM device driver when the device reports a hardware error or command-aborted error in response to a SCSI I/O command. ADSM_DD_LOG3 (0X461B41DE) MEDIA ERROR Logged by the ADSM device driver when a SCSI I/O command fails because of corrupted or incompatible media, or because a drive requires cleaning. ADSM_DD_LOG4 (0X4225DB66) TARGET DEVICE GOT UNIT ATTENTION Logged by the ADSM device driver after receiving a UNIT ATTENTION notification from a device. UNIT ATTENTIONs are informational and usually indicate that some state of the device has changed. For example, this error would be logged if the door of a library device was opened and then closed again. Logging this event indicates that the activity occurred and that the library inventory may have been changed. 
ADSM_DD_LOG5 (0XDAC55CE5) PERMANENT UNKNOWN ERROR Logged by the ADSM device driver after receiving an unknown error from a device in response to a SCSI I/O cmd. There is no single cause for this: the cause is to be determined by examining the Command, Status Code, and Sense Data. For example, it could be that a SCSI command such as Reserve (X'16') or Release (X'17') was issued with no args (rest of Command is all zeroes). adsmfsm /etc/filesystems attribute, set "true", which is added when 'dsmmigfs' or its GUI equivalent is run to add ADSM HSM control to an AIX file system. Adsmpipe An unsupported Unix utility which uses the *SM API to provide archive, backup, retrieve, and restore facilities for any data that can be piped into it, including raw logical volumes. (In that TSM 3.7+ can back up Unix raw logical volumes, there is no need for Adsmpipe to serve that purpose. However, it is still useful for situations where it is inconvenient or impossible to back up a regular file, such as capturing the output of an Oracle Export operation where there isn't sufficient Unix disk space to hold it for 'dsmc i'.) By default, files enter TSM storage with a filespace name of "/pipe" (which can be overridden via -s). Adsmpipe invocation is similar to the 'tar' command. Do 'adsmpipe' (no operands) to see usage. Its option flags are: -A Store in TSM as Archive data. -B Store in TSM as Backup data. This is the default. -c To back up a file to the *SM server, where -f is used to specify the arbitrary name to be assigned to the file as it is to be stored in the *SM server. Input comes from Stdin. Messages go to Stderr. -d Delete from TSM storage. -f Mandatory option to specify the name used for the file in the filespace. -l The estimated size of the data, in bytes, as needed for create. -m To specify a management class. -p Change password. -s To specify a filespace name. -t To list previous backup files. Messages go to Stderr. -v Verbose output. -x To restore file from the *SM server. 
Do not include the filespace name in the -f spec. Output goes to Stdout. Messages go to Stderr. The session will show up as an ordinary backup, including in accounting data. To later query TSM for the backup, do: adsmpipe -tvf There is a surprising amount of crossover between this API-based facility and the standard B/A client: 'dsmc q f' will show the backup as type "API:ADSMPIPE". 'dsmc q ba -su=y /pipe/\*' will show the files. (Oddly, if -su=y is omitted, the files will not be seen.) 'dsmc restore -su=y /pipe/' can restore the file - but the permissions may need adjustment, and the timestamp will likely be wacky: it is best to restore with adsmpipe. Ref: Redbook "Using ADSM to Back Up Databases" (SG24-4335) Redpiece REDP-3980: "Backing Up Databases using ADSMPIPE and the TSM API: Examples Using Linux" To get the software: go to http://www.redbooks.ibm.com/, search on the redpiece number (or "adsmpipe"), and then on its page click Additional Material, whereunder lies the utility. That leads to: ftp://www.redbooks.ibm.com/redbooks/ REDP3980/ .adsmrc (Unix client) The ADSMv3 Backup/Archive GUI introduced an Estimate function. It collects statistics from the ADSM server, which the client stores, by *SM server address, in the .adsmrc file in the user's Unix home directory, or Windows dsm.ini file. Client installation also creates this file in the client directory. Ref: Client manual chapter 3 "Estimating Backup processing Time"; ADSMv3 Technical Guide redbook See also: dsm.ini; Estimate; TSM GUI Preferences adsmrsmd.dll Windows library provided with the TSM 4.1 server for Windows. (Not installed with 3.7, though.) For Removable Storage Management (RSM). Should be in directory: c:\program files\tivoli\tsm\server\ as both: adsmrsm.dll and adsmrsmd.dll Messages: ANR9955W See also: RSM ADSMSCSI Older, ADSM device driver for Windows (2000 and lower), for each disk drive. 
With the advent of TSM and later Windows incarnations, the superseding TSMSCSI device driver is used, installed on each drive now, rather than having one device driver for all the drives. See: TSMSCSI adsmserv.licenses ADSMv2 file in /usr/lpp/adsmserv/bin/, installed with the base server code and updated by the 'REGister LICense' command to contain encoded character data (which is not the same as the hex strings you typed into the command). For later ADSM/TSM releases, see "nodelock". If the server processor board is upgraded such that its serial number changes, the REGister LICense procedure must be repeated - but you should first clear out the /usr/lpp/adsmserv/bin/adsmserv.licenses file, else repeating "ANR9616I Invalid license record" messages will occur. See: License...; REGister LICense adsmserv.lock The *SM server lock file, located in the server directory. It both carries info about the currently running server, and serves as a lock point to prevent a second instance from running. Sample contents: "dsmserv process ID 19046 started Tue Sep 1 06:46:25 1998". If you have multiple TSM servers, necessarily with multiple TSM server directories, then inspecting the lock file is a better way to distinguish which is which than running 'ps' and trying to discern. See also: dsmserv.lock; Servers, multiple AdsmTape Open source software for AIX to allow data on an ADSMv3 tape to be partially recovered directly from the tape, when the ADSM database that described the tape is no longer available. This utility has been replaced by TSMtape (q.v.). ADSTAR An acronym: ADvanced STorage And Retrieval. In the 1992 time period, IBM under John Akers tried spinning off subsidiary companies to handle the various facets of IBM business. ADSTAR was the advanced storage company, whose principal product was hardware, but also created some software to help utilize the hardware they made. Thus, ADSM was originally a software product produced by a hardware company. 
Lou Gerstner subsequently became CEO, thought little of the disparate sub-companies approach, and re-reorganized things such that ADSTAR was reduced to mostly a name, with its ADSM product now being developed under the software division. ADSTAR Distributed Storage Manager (ADSM) A client/server program product that provides storage management services to customers in a multivendor computer environment. Advanced Device Support license For devices such as a 3494 robotic tape library. Advanced Program-to-Program Communications (APPC) An implementation of the SNA LU6.2 protocol that allows interconnected systems to communicate and share the processing of programs. See Systems Network Architecture Logical Unit 6.2 and Common Programming Interface Communications. Discontinued as of TSM 4.2. afmigr.c Archival migration agent. See also: dfmigr.c AFS Through TSM 5.1, you can use the standard dsm and dsmc client commands on AFS file systems, but they cannot back up AFS Access Control Lists for directories or mount points: use dsm.afs or dsmafs, and dsmc.afs or dsmcafs to accomplish complete AFS backups by file. The file backup client is installable from the adsm.afs.client installation file, and the DFS fileset backup agent is installable from adsm.butaafs.client. In ADSM, use of the AFS/DFS clients required purchase of the Open Systems Environment Support license, for the server to receive the files sent by that client software. The resulting AFS backup filespace will likely show type "API:LFS FILESYSTEM". As of AFS 3.6, AFS itself supports backups to TSM through XBSA (q.v.), meaning that buta will no longer be necessary - and that TSM, as of 5.1, has discontinued development of the now-irrelevant backup functionality in the TSM client. See: http://www.ibm.com/software/stormgmt/ afs/manuals/Library/unix/en_US/HTML/ RelNotes/aurns004.htm#HDRTSM_NEW See also: OpenAFS AES Advanced Encryption Standard. See: Encryption Affinity Perhaps you mean DEFine ASSOCiation. 
Affinity ID As seen in some ANR9999D messages, but no definition of what it actually is. Problems with Affinity ID relate to TSM clients which are downlevel relative to the TSM server: the client is sending stuff which the server does not like. To correct the problem, the client needs to be upgraded. AFS and TSM 5.x There is no AFS support in TSM 5.x, as there is none specifically in AIX 5.x (AIX 4.3.3 being the latest). This seems to derive from the change in the climate of AFS, where it has gone open-source, thus no longer a viable IBM/Transarc product. AFS backups, delete You can use 'delbuta' to delete from AFS and TSM. Or: Use 'deletedump' from the backup interface to delete the buta dumps from the AFS backup database. The only extra step you need to do is run 'delbuta -s' to synchronize the TSM server. Do this after each deletedump run, and you should be all set. AFS backups, reality Backing up AFS is painful no matter how you do it... Backup by volume (using the *SM replacement for butc) is fast, but can easily consume a LOT of *SM storage space because it is a full image backup every time. To do backup by file properly, you need to keep a list of mount points and have a backup server (or set of clients) that has a lot of memory so that you can use an AFS memory cache - and using a disk cache takes "forever". AFSBackupmntpnt Client System Options file option, valid only when you use dsmafs and dsmcafs. (dsmc will emit error message ANS4900S and ignore the option.) Specifies whether you want ADSM to see an AFS mount point as a mount point (Yes) or as a directory (No): Yes ADSM considers an AFS mount point to be just that: ADSM will back up only the mount point info, and not enter the directory. This is the safer of the two options, but limits what will be done. No ADSM regards an AFS mount point as a directory: ADSM will enter it and (blindly) back up all that it finds there. 
Note that this can be dangerous, in that use of the 'fts crmount' command is open to all users, who through intent or ignorance can mount parts or all of the local file system or a remote one, or even create "loops". All of this is to say that file-oriented backups of AFS file systems are problematic. See also: DFSBackupmntpt Age factor HSM: A value that determines the weight given to the age of a file when HSM prioritizes eligible files for migration. The age of the file in this case is the number of days since the file was last accessed. The age factor is used with the size factor to determine migration priority for a file. It is a weighting factor, not an absolute number of days since last access. Defined when adding space management to a file system, via dsmhsm GUI or dsmmigfs command. See also: Size factor agent.lic file As in /usr/tivoli/tsm/client/oracle/bin/ Is the TDPO client license file. Lower level servers don't have server side licensing. TSM uses that file to verify on the client side. TDPO will not run without a valid agent.lic file. Aggregate See: Aggregates; Reclamation; Stored Size. Aggregate data transfer rate Statistic at end of Backup/Archive job, reflecting transmission over the full job time, which thus includes all client "think time", file system traversal, and even time the process was out of the operating system dispatch queue. Is calculated by dividing the total number of bytes transferred by the elapsed processing time. Both Tivoli Storage Manager processing and network time are included in the aggregate transfer rate. Therefore, the aggregate transfer rate is lower than the network transfer rate. Contrast with "Network data transfer rate", which can be expected to be a much higher number because of the way it is calculated. Activity Log message: ANE4967I Ref: B/A Client manual glossary. Aggregate function SQL: A function, such as Sum(), Count(), Avg(), and Var(), that you can use to calculate totals. 
In writing expressions and in programming, you can use SQL aggregate functions to determine various statistics on sets of values. Aggregated? In ADSMv3+, a report element from command 'Query CONtent ... Format=Detailed': Reveals whether or not the file is stored in the server in an Aggregate and, if so, the position within the aggregate, as in "11/23". If not aggregated, it will report "No". See also: Segment Number; Stored Size Aggregates Refers to the Small Files Aggregation (aka Small File Aggregation) feature introduced in ADSMv3. During Backup and Archive operations, small files are automatically packaged into larger objects called Aggregates, to be transferred and managed as a whole, thus reducing overhead (database and tape space) and improving performance. An Aggregate is a single file stored at the server, managed as a single object. Aggregates are populated indiscriminately, and may contain file system objects without regard to node, object size, object type, or owner. Space-managed (HSM) files are not aggregated, which lessens HSM performance but eliminates delays. The TSM API certainly supports Aggregation; but Aggregation depends upon the files in a transaction all being in the same file space. TDPs use the API, but often work with very large files, which may each be a separate file space of their own. Hence, you may not see Aggregation with TDPs. But the size of the files means that Aggregation is not an issue for performance. The size of the aggregate varies with the size of the client files and the number of bytes allowed for a single transaction, per the TXNGroupmax server option (transaction size as number of files) and the TXNBytelimit client option (transaction size as number of bytes). Too-small values can conspire to prevent aggregation - so beware using TCPNodelay in AIX. As is the case with files in general, an Aggregate will seek the storage pool in the hierarchy which has sufficient free space to accommodate the Aggregate. 
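As a rough illustration of how the two transaction limits just described bound an aggregate, here is a minimal Python sketch. The function name and grouping logic are invented for illustration - this is not TSM's actual algorithm, merely the idea that an aggregate closes when either the file-count limit (TXNGroupmax) or the byte limit (TXNBytelimit) would be exceeded:

```python
# Illustrative only: how TXNGroupmax (max files per transaction) and
# TXNBytelimit (max bytes per transaction) jointly bound an aggregate.

def build_aggregates(file_sizes, txngroupmax, txnbytelimit):
    """Group a stream of file sizes into aggregates, closing the current
    aggregate when either the file-count or the byte limit would be
    exceeded by the next file."""
    aggregates = []
    current, current_bytes = [], 0
    for size in file_sizes:
        if current and (len(current) >= txngroupmax
                        or current_bytes + size > txnbytelimit):
            aggregates.append(current)
            current, current_bytes = [], 0
        current.append(size)
        current_bytes += size
    if current:
        aggregates.append(current)
    return aggregates

# With generous limits, many small files coalesce into one aggregate;
# with a tiny TXNBytelimit, each file travels alone (no aggregation).
print(build_aggregates([10, 10, 10, 10], txngroupmax=256, txnbytelimit=2048))
# -> [[10, 10, 10, 10]]
print(build_aggregates([10, 10, 10, 10], txngroupmax=256, txnbytelimit=10))
# -> [[10], [10], [10], [10]]
```

This is why "too-small values can conspire to prevent aggregation": either limit alone is enough to force one-file aggregates.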
An aggregate that cannot fit entirely within a volume will span volumes, and if the break point is in the midst of a file, the file will span volumes. Note that in Reclamation the aggregate will be simply copied with its original size: no effort will be made to construct output aggregates of some nicer size, ostensibly because the data is being kept in a size known to be a happy one for the client, to facilitate restorals. Files which were stored on the server unaggregated (as for example, long-retention files stored under ADSMv2) will remain that way indefinitely and so consume more server space than may be realized. (You can verify with Query CONtent F=D.) Version 2 clients accessing a v3 server should use the QUIET option during Backup and Archive so that files will be aggregated even if a media mount is required. Your Stgpool MAXSize value limits the size of an Aggregate, not the size of any one file in the Aggregate. See also: Aggregated?; NOAGGREGATES; Segment Number Ref: Front of Quick Start manual; Technical Guide redbook; Admin Guide "How the Server Groups Files before Storing" Aggregates and reclamation As expiration deletes files from the server, vacant space can develop within aggregates. For data stored on sequential media, this vacant space is removed during reclamation processing, in a method called "reconstruction" (because it entails rebuilding an aggregate without the empty space). Aggregation, see in database SELECT * FROM CONTENTS WHERE NODE_NAME='UPPER_CASE_NAME' ... In the report: FILE_SIZE is the Physical, or Aggregate, size. The size reflects the TXNBytelimit in effect on the client at the time of the Backup or Archive. AGGREGATED is either "No" (as in the case of HSM, or files Archived or Backup'ed before ADSMv3), or the relative number of the reported file within the aggregate, like "2/16". 
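To experiment with such a CONTENTS query offline, one can mock up a miniature version of the table. The sketch below uses SQLite with invented sample rows and only the columns discussed here (the real TSM table has more columns, and the real query runs via Select on the server):

```python
import sqlite3

# A toy stand-in for the server's CONTENTS table; values are invented.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE contents"
            " (node_name TEXT, file_name TEXT,"
            "  file_size INTEGER, aggregated TEXT)")
con.executemany("INSERT INTO contents VALUES (?, ?, ?, ?)", [
    ("MYNODE", "/home/a", 262144, "2/16"),  # member 2 of a 16-file aggregate
    ("MYNODE", "/home/b", 262144, "3/16"),
    ("MYNODE", "/hsm/c", 524288, "No"),     # HSM data is never aggregated
])

# Analogous to: SELECT * FROM CONTENTS WHERE NODE_NAME='MYNODE'
# (remember that the real table stores the node name in upper case)
for row in con.execute("SELECT file_name, aggregated FROM contents"
                       " WHERE node_name = 'MYNODE' AND aggregated <> 'No'"
                       " ORDER BY file_name"):
    print(row)
```

The "aggregated <> 'No'" predicate picks out only the files stored inside aggregates, mirroring the "No" vs. "2/16"-style values described above.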
The value reflects the TXNGroupmax server limit on the number of files in an Aggregate, plus the client TXNBytelimit limiting the size of the Aggregate. Remember that the Aggregate will shrink as reclamation recovers space from old files within the Aggregate. AIT Advanced Intelligent Tape technology, developed by Sony and introduced in 1996 to handle the capacity requirements of large, data-intensive applications. This is video-style, helical-scan technology, wherein data is written in diagonal slashes across the width of the 8mm tape. Like its 8mm predecessor technology, AIT is less reliable than linear tape technologies because AIT tightly wraps the tape around various heads and guides at much sharper angles than linear tape, and its heads are mechanically active, making for vibration and higher wear on the tape, lowering reliability. Data is compressed before being written on the tape, via Adaptive Lossless Data Compression (ALDC - an IBM algorithm), which offers compression averaging 2.6x across multiple data types. The Memory-in-Cassette (MIC) feature puts a flash memory chip in with the tape, for remembering file positions or storing a limited amount of data: the MIC chip contains key parameters such as a tape log, search map, number of times loaded, and application info that allow flexible management of the media and its contents. The memory size was 16 MB in AIT-1; is 64 MB in AIT-3. Like DLT, AIT is a proprietary rather than open technology, in contrast to LTO. See: //www.aittape.com/mic.html Cleaning: The technology monitors itself and invokes a built-in Active Head Cleaner as needed; a cleaning cartridge is recommended periodically to remove dust and build-up. Tape type: Advanced Metal Evaporated (AME) Cassette size: tiny, 3.5 inch, 8mm tape. Capacity: 35 GB native. Sony claims their AIT drives of *all* generations achieve 2.6:1 average compression ratio using Adaptive Lossless Data Compression (ALDC), which would yield 90 GB. 
Transfer rate: 4 MB/s without compression, 10 MB/s with compression. Head life: 50,000 hours Media rating: 30,000 passes. Lifetime estimated at over 30 years. AIT is not an open architecture technology - only Sony makes it - a factor which has caused customers to gravitate toward LTO instead. Ref: www.sony.com/ait www.aittape.com/ait1.html http://www.mediabysony.com/ctsc/ pdf/spec_ait3.pdf http://www.tapelibrary.com/aitmic.html http://www.aittape.com/ ait-tape-backup-comparison.html http://www.tape-drives-media.co.uk/sony /about_sony_ait.htm Technology is similar to Mammoth-2. See also: MAM; SAIT AIT-2 (AIT2) Next step in AIT. Capacity: 50 GB native. Sony claims their AIT drives of *all* generations achieve 2.6:1 average compression ratio using Adaptive Lossless Data Compression (ALDC), which would yield 130 GB. Transfer rate: 6 MB/sec max without compression; 15 MB/s with. Technology is similar to Mammoth-2. AIT-3 (AIT3) Next Sony AIT generation - still using 8mm tape and helical-scan technology. Capacity: 100 GB without compression, 260 GB with 2.6:1 compression. Transfer rate: 12 MB/sec max without compression; 30 MB/s with. MIC: 64 MB flash memory AIT customers have become disgruntled, finding major reliability problems which cannot be resolved, even after replacing drives. Helical scan technology is great for analog video, but has historically proven ill-suited to the rigors of digital data processing, where linear tracking tape technology is better. AIX 5L, 32-bit client The 32-bit B/A client for both AIX 4.3.3 & AIX 5L is in the package tivoli.tsm.client.ba.aix43.32bit (API client in tivoli.tsm.client.api.aix43.32bit, image client in tivoli.tsm.client.image.aix43.32bit, etc.). Many people seem to be confused by the "aix43" part of the names, looking for non-existent *.aix51.32bit packages. 
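The compressed capacities quoted in the AIT entries above are simply native capacity multiplied by Sony's claimed 2.6:1 average ALDC ratio, as this quick check shows (native figures taken from the entries; AIT-1's 91 GB is conventionally rounded to 90):

```python
# Native capacities in GB per AIT generation, from the entries above.
native_gb = {"AIT-1": 35, "AIT-2": 50, "AIT-3": 100}
ALDC_RATIO = 2.6  # Sony's claimed average compression ratio

for gen, gb in native_gb.items():
    print(f"{gen}: {gb} GB native -> ~{round(gb * ALDC_RATIO)} GB compressed")
# AIT-1: 35 GB native -> ~91 GB compressed
# AIT-2: 50 GB native -> ~130 GB compressed
# AIT-3: 100 GB native -> ~260 GB compressed
```

As always with vendor compression ratios, real yield depends entirely on the data: already-compressed data will see little or no gain.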
AIX client: AIX levels supported
    TSM LEVEL  AIX LEVEL
    3.1        4.2
    5.1.7      4.3
    5.1        4.3.3 (TSM 5.1.7 is the highest for this AIX level)
               5.1 (32- or 64-bit kernel)
    5.2.0      5.1 (32- or 64-bit kernel)
               5.2 (32- or 64-bit kernel)
    5.2.2      5.1 (32- or 64-bit kernel)
               5.2 (32- or 64-bit kernel)
    5.3        5.1 (32- or 64-bit kernel)
               5.2 (32- or 64-bit kernel)
               5.3 (32- or 64-bit kernel)
    5.4        5.2 (32- or 64-bit kernel)
               5.3 (32- or 64-bit kernel)
    5.5        5.3 (32- or 64-bit kernel)
               6.1 (64-bit only)
    6.1        5.3 (32- or 64-bit kernel)
               6.1 (64-bit only)
Note: AIX 6.x is a 64-bit operating system, for modern RS/6000s. AIX device advice Always issue an 'lsdev' command on a device before introducing it to TSM, or when troubleshooting, to assure that it has a status of Available to that operating system: a device which has a status of Defined will be unusable to an application. Where an existing device has been changed, you should follow AIX standard procedures to remove the old definition and replace with the new: this typically resolves to performing an 'rmdev' followed by 'cfgmgr' on the device or its controller. AIX Error Log This is where AIX logs all hardware and software problems. If you are an AIX system person, you need to be familiar with this logging, as it is a vital source of information. Inspection of the log is through the 'errpt' command, most conveniently used as 'errpt -a | less'. AIXASYNCIO TSM 5.1+ server option to allow the use of asynchronous I/O in AIX. AIO enhances system performance by allowing individual non-contiguous I/O requests to be gathered together into one request and written to disk in parallel. Is not enabled by default. To enable it, you need to put "AIXASYNCio YES" into the dsmserv.opt file, enable AIO in the operating system, restart AIX, and then start the TSM server. AIO is turned on in AIX by changing the settings of the aio0 pseudo device driver. 
This is what is now known as the "legacy" AIO in AIX: as of AIX 5.2 there is also a newer, POSIX version, its device driver called posix_aio0. The two are not interchangeable: they differ in how parameters are passed and in some of the function definitions. You compile to use one or the other. From all we know, TSM uses only the legacy aio0, not posix_aio0. AIXASYNCIO and AIXDIRECTIO notes TSM employs direct I/O for storage pool volumes (not for its database). Further, it "works best" with storage pool files created on a JFS filesystem that is NOT large file enabled. Apparently, AIX usually implicitly disables direct I/O on I/O transactions on large file enabled JFS due to TSM's I/O patterns. To ensure use of direct I/O, you have to use non-large file enabled JFS, which limits your volumes to 2 GB each, which is very restrictive. IBM recommends: AIXDIRECTIO YES AIXASYNCIO NO Asynchronous I/O supposedly has no JFS or file size limitations, but is only used for TSM database volumes. Recovery log and storage pool volumes do not use async I/O. AIX 5.1 documentation mentions changes to the async I/O interfaces to support offsets greater than 2 GB, however, which implies that at least some versions (32-bit TSM server?) do in fact have a 2 GB file size limitation for async I/O. I was unable to get clarity on this point in the PMR I opened. Even if AIX asynchronous I/Os are enabled, this does not mean that the TSM server always (only) uses asynchronous I/Os: it does so only when there is enough workload to warrant it: when more than 2 blocks are queued for writing out to disk. TSM server trace BLKDISK can be used to see evidence of Async I/O. The AIXDIRECTIO option is obsoleted in TSM 5.3 because it is always in effect now. (If present in the file, no error message will be issued, at least early in the phase-out.) AIXASYNCIO, verify Do 'Query OPTions' and look for "Async I/O" being set to "Yes". 
In AIX, do 'lsattr -El aio0' to confirm that asynchronous I/O is enabled. AIXDIRECTIO Removed as of TSM 5.3.1: AIXDIRECTIO NO is now silently ignored. Direct I/O is automatically on for all eligible disk volumes. ALDC Adaptive Lossless Data Compression compression algorithm, as used in Sony AIT-2. IBM's ALDC employs their proprietary version of the Lempel-Ziv compression algorithm called IBM LZ1. Ref: IBM site paper "Design considerations for the ALDC cores". See also: ELDC; LZ1; SLDC ALL-AUTO-LOFS Specification for client DOMain option to say that all loopback file systems (lofs) handled by automounter are to be backed up. See also: ALL-LOFS ALL-AUTO-NFS Specification for client DOMain option to say that all network file systems (nfs) handled by automounter are to be backed up. See also: ALL-NFS ALL-LOCAL The Client User Options file (dsm.opt) DOMain statement default, which may be coded explicitly, to include all local hard drives, excluding /tmp in Unix, and excluding any removable media drives, such as CD-ROM. Local drives do not include NFS-mounted file systems. Reportedly, USB "thumb" drives are being backed up via ALL-LOCAL, though they should be considered removable media. In 4.1.2 for Windows, its default is to include the System Object (Registry, event logs, COM+ db, system files, Cert Serv Db, AD, FRS, cluster db - which of these the system object contains depends on whether the system is Pro, a domain controller, etc.). If you specify a DOMAIN that is not ALL-LOCAL, and want the System Object backed up, then you need to include SYSTEMOBJECT, as in: DOMAIN C: E: SYSTEMOBJECT See also: File systems, local; /tmp ALL-LOFS Specification for client DOMain option to say that all loopback file systems (lofs), except those handled by the automounter, are to be backed up. See also: ALL-AUTO-LOFS ALL-NFS Specification for client DOMain option to say that all network file systems (nfs), except those handled by the automounter, are to be backed up. 
See also: ALL-AUTO-NFS Allow access to files See: dsmc SET Access ALMS Advanced Library Management System: a feature available on libraries such as the 3584. Alternate pathing (tape drive) Can be achieved for the control path, or the data path for 358* libraries and drives, and 3592 tape drives. In AIX, via the alt_pathing attribute in the Atape device driver, as in: chdev -l rmt1 -a alt_pathing=yes Note that for 358* drives and libraries, a purchasable DPF license key is needed. Ref: Implementing IBM Tape in UNIX Systems redbook Always backup ADSMv3 client GUI backup choice to back up files regardless of whether they have changed. Equivalent to command line 'dsmc Selective ...'. You should normally use "Incremental (complete)" instead, because "Always" redundantly sends to the *SM server data that it already has, thus inflating tape utilization and *SM server database space requirements. Amanda The Advanced Maryland Automatic Network Disk Archiver. A free backup system that allows the administrator of a LAN to set up a single master backup server to back up multiple hosts to a single large capacity tape drive. AMANDA uses native dump and/or GNU tar facilities and can back up a large number of workstations running multiple versions of Unix. Recent versions can also use SAMBA to back up Microsoft Windows 95/NT hosts. http://www.amanda.org/ (Don't expect to find a system overview of Amanda. Documentation on Amanda is very limited.) http://sourceforge.net/projects/amanda/ http://www.backupcentral.com/amanda.html AMENG See also: LANGuage; USEUNICODEFilenames Amount Migrated As from 'Query STGpool Format=Detailed'. Specifies the amount of data, in MB, that has been migrated, if migration is in progress. If migration is not in progress, this value indicates the amount of data migrated during the last migration. When multiple, parallel migration processes are used for the storage pool, this value indicates the total amount of data migrated by all processes. 
Note that the value can be higher than reflected in the Pct Migr value if data was pouring into the storage pool as migration was occurring. See also: Pct Migr; Pct Util ANE Messages prefix for event logging, which is to say information sent from the TSM Backup/Archive client to the TSM server for logging in the Activity Log and EVENTS table. This happens at the time that message "Results sent to server for scheduled event '_____'." appears in the client scheduler log. See Messages manual. See also notes under: Accounting log aobpswd Pre-TSM4.2 password-setting utility for the TDP for Oracle, replaced by the tdpoconf utility. aobpswd connected to the server specified in the dsm.opt file, to establish an encrypted password in a public file on your client system. This creates a file called TDPO. in the directory specified via the DSMO_PSWDPATH environment variable (or the current directory, if that variable is not set). Thereafter, this file must be readable to anyone running TDPO. Use aobpswd to later update the password. Note that you need to rerun aobpswd before the password expires on the server. Ref: TDP Oracle manual APA AutoPort Aggregation APAR Authorized Program Analysis Report: IBM's terminology for an individual fix to a software product. An APAR consists of a descriptive report as well as the fix, the latter being either an update to a software component, or a complete replacement for it (depending upon the nature of the product). It is common for the APAR to be closed before the fix is out. APARs applied to ADSM on AIX system See: PTFs applied to ADSM on AIX system API Application Programming Interface. The product has historically provided an API for Backup and Archive facilities plus associated queries, providing a library such that programs may directly perform common TSM operations. (There is no API support for HSM.) 
The API serves as a conduit for data which the programmer gives for sending to the TSM server, which is to say that the API does no I/O of its own. The API is designed to be upwardly compatible: the app you write with today's level of the API will work in years to come. As of 4.1, available for: AS/400, NetWare, OS/2, Unix, Windows. Has historically been provided in both product-proprietary code (dapi*, dsmapi*, libApiDS.a) as well as the X/OPEN interface code (xapi*, libXApi.a) more commonly known as XBSA. The API is largely incompatible with the standard Backup-Archive clients (which are *not* based upon the API). Thus, the API cannot be used to access files backed up or archived with the regular Backup-Archive clients. Attempting to do so will yield "ANS4245E (RC122) Format unknown" (same as ANS1245E). Nor can files stored via the API be seen by the conventional clients. Nor can different APIs see each others' files. The only general information that you can query is file spaces and management classes. The CLI provides limited interoperability, but the doc stresses that the backup-archive GUI is not supported in connection with objects stored via the API. The API manual specifically advises that filespaces should be unique to either the API or the B/A client - no mixing. The API manual, in Chapter 4 ("Interoperability"), briefly indicates that the regular command line client can do some things with data sent to the server via the API - but not vice versa. This is frustrating, as one would want to use the API to gain finely controlled access to data backed up by regular clients. Interoperability is limited in the product. Note that there is no administrative API.
What TSM functions are supported:
- Compression: Controlled by the usual client option COMPRESSIon.
- LAN-free support: The TSM API supports LAN-free, as of TSM 4.2.
- Encryption: Appeared at the 5.3.0 level.
What TSM functions are *not* supported:
- RESOURceutilization is not available in the API: that option is used to funnel data at the file level, and the TSM API does not perform any file I/O. Since the TDPs are based upon the API, RESOURceutilization does not pertain to the TDPs.
To dispel misimpressions: The API is *not* the basis of the B/A client, but is used by the TDPs. (My guess is that the API is a derivative of common code developed for the B/A client, only later adapted for the API. This is borne out by fundamental client features, such as encryption, being late to appear in the API. Further, the API is available on platforms (OS/400) where there is no B/A client.)
Performance: The APIs typically do not aggregate files as do standard TSM clients. Lack of aggregation is usually not detrimental to performance with APIs, though, in that they are typically used in dealing with a small number of large files.
Ref: Client manual "Using the API". Tivoli Field Guide - Tivoli Storage Manager API Essentials. See also: POR
API, installing tips When installing or updating the API, you should definitely shut down all users of it - some of whom keep it in memory.
API, Windows Note that the TSM API for Windows handles objects as case insensitive but case preserving. This is an anomaly resulting from the fact that SQL Server allows case-sensitive database names.
API and backup sets There is no support in the TSM API for the backupset format. As the Admin Guide manual says: "backup sets can only be used by a backup-archive client".
API config file See the info in the "Using the API" manual about configuration file options appropriate to the API. Note that the API config file is specified on the dsmInit call.
API header files See: dsmapi*.h
API install directory The location of the installation directory has changed over the years...
TSM through 5.4: AIX: /usr/tivoli/tsm/client/api/ Win: C:\Program Files\Tivoli\TSM\ TSM 5.5+ (see Technote 1290320): AIX: Win: C:\Program Files\Common Files\ Tivoli\TSM\api ADSM days: AIX: /usr/lpp/adsm/api/ API installed? AIX: There will be a /usr/lpp/adsm/api directory. API tracing See the TSM Problem Determination Guide. Some info in Performance Problem Determination presentation available through Technote 1145012. APPC Advanced Program-to-Program Communications. Discontinued as of TSM 4.2. Application client A software application that runs on a workstation or personal computer and uses the ADSM application programming interface (API) function calls to back up, archive, restore, and retrieve objects. Contrast with backup-archive client. Application Programming Interface A set of functions that application (API) clients can call to store, query, and retrieve data from ADSM storage. Approx. Date Last Read Report line from 'Query Volume', containing both date and hh:mm:ss time of day. Actually reports the last time that the volume was opened for reading. For example, in a MOVe Data or similar volume reclamation, the date/time reflect when the volume was mounted and opened for reading, not when the last block was read, even if that last block read was two hours later. Corresponds to VOLUMES table field LAST_READ_DATE. See also: Number of Times Mounted Approx. Date Last Written Report line from 'Query Volume', containing both date and hh:mm:ss time of day. Reflects when data was last added to the volume; that is, when the last Physical File was written to the volume. (As such, don't expect to find a record of such an event in the Activity Log.) Note that the date does not necessarily reflect when the contained data arrived in TSM storage pools, as such data is subject to reclamation, migration, move data, and other operations which transfer the data to different volumes. Corresponds to VOLUMES table field LAST_WRITE_DATE. 
See also: Last Update Date/Time
AR_COPYGROUPS Archive copy groups table in the TSM database. Columns: DOMAIN_NAME, SET_NAME, CLASS_NAME, COPYGROUP_NAME, RETVER, SERIALIZATION, DESTINATION, CHG_TIME, CHG_ADMIN, PROFILE, RETINIT, RETMIN
Arch Archive file type, in Query CONtent report. Other types: Bkup, SpMg
ARCHDELete A Yes/No parameter on the 'REGister Node' and 'UPDate Node' commands to specify whether the client node can delete its own archived files from the server. Default: Yes. Its value can be seen in the TSM server command 'Query Node' and the client command 'dsmc Query SEssion'. See also: BACKDELete
ArchIns TSM transaction verb for when Archive data is being received from a client.
Archive The process of copying files to a long-term storage device. V2 Archive archived only files: it did *not* archive directories, or symbolic links, or special files!!! Just files. Thus, Archive was then not strictly suitable for making file system images. (The V2archive option in modern clients achieves the same operation.) In Archive, file permissions are retained, including Access Control Lists (ACLs). Symbolic links are followed, to archive the file pointed to by the symlink. As of ADSMv3, directories are archived. However, by virtue of storing the full path to the file, ADSM knew all the directory names, so could recreate directories upon Retrieve, though without full attributes. Archive's emphasis is individual files: in Windows, it cannot be used as a substitute for Backup, because Archive cannot capture consistent system state. Permissions: You can archive any file to which you have read access. That archived image is owned by the user who performed the archive - which is independent of the owner of the file as it sits in the file system. Later retrieval can be performed by the same user, or the superuser. Include/Exclude is not applicable to archiving: just to backups.
When you archive a file, you can specify whether to delete the file from your local file system after it is copied to ADSM storage or leave the original file intact. Archive copies may be accompanied by descriptive information, may imply data compression software usage, and may be retrieved by archive date, object name, or description. Windows: "System Object" data (including the Registry) is not archived. Instead, you could use MS Backup to back up System State to local disk, then use TSM to archive this. Contrast with Retrieve. See also: dsmc Archive; dsmc Delete ARchive; FILESOnly; V2archive For a technique on archiving a large number of individual files, see entry "Archived files, delete from client".
Archive, compensate for primitiveness TSM Archive files management is a particular challenge, largely because such archived files are often no longer in the client file system, and thus "invisible" to users until they perform just the right query; and files expire without fanfare, which makes for more of a guessing game. Sadly, IBM has left Archive a rather primitive affair, being about as limited as it was in ADSMv1. So, if you were administering a client system with users doing Archive, how might you improve things? A big, relatively simple step would be to have users perform archive and retrieve through an interface script. If the user does not supply a Description, the script supplies one which clearly identifies the file, such as the filename followed by ":Archive Date:" and a timestamp - a considerable improvement over the problematic one which TSM supplies by default. Including the time in the Description renders each object unique. The archiving facility would also recognize that the directory structure had been previously stored, and would not store it again, with this new Description value, as TSM currently does. Formulating the date in hierarchical form (YYYYMMDD) facilitates wildcard searches through an asterisk at the end, as in YYYY* or YYYYMM*.
For further value, add tracking, at least appending an entry to a flat file in the user's personal directory recording originating system, timestamp, filename, management class (usually, Default), and Description used, which itself could be searched or referenced by the user to see what files had been sent off to the hinterlands. The script could be of particular value if it did the recording asynchronously, in that it could take the further time to do a Query ARchive to capture the "Expires on" value for the object, without delaying the user. Such info might be tracked instead in a MySQL db, or the like...but that would be just one more thing to administer and trouble-shoot.
Archive, delete the archived files Use the DELetefiles option (q.v.).
Archive, exclude files In TSM 4.1: EXCLUDE.Archive
Archive, from Windows, automatic date in Description You can effect this from the DOS command line, like: dsmc archive c:\test1\ -su=y -DEScription="%date% Test Archive"
Archive, latest Unfortunately, there is no command line option to return the latest version of an archived file. However, for a simple filename (no wildcard characters) you can do: 'dsmc q archive ' which will return a list of all the archived files, where the latest is at the bottom, and can readily be extracted (in Unix, via the 'tail -1' command).
Archive, long term, issues A classic situation that site technicians have to contend with is site management mandating the keeping of data for very long term periods, as in five to ten years or more. This may be incited by requirements such as those of Sarbanes-Oxley. In approaching this, however, site management typically neglects to consider issues which are essential to the data's long-term viability:
- Will you be able to find the media in ten years? Years are a long time in a corporate environment, where mergers and relocations and demand for space cause a lot of things to be moved around - and forgotten.
Will the site be able to exercise inventory control over long-term data?
- Will anyone know what those tapes are for in the future? The purpose of the tapes has to be clearly documented and somehow remain with the tapes - but not on the tapes. Will that doc even survive?
- Will you be able to use the media then? Tapes may survive long periods (if properly stored), but the drives which created them and could read them are transient technology, with readability over multiple generations being rare. Likewise, operating systems and applications greatly evolve over time. And don't overlook the need for human knowledge to be able to make use of the data in the future.
To fully assure that frozen data and media kept for years would be usable in the future, the whole environment in which they were created would essentially have to be frozen in time: computer, OS, appls, peripherals, support, user procedures. That's hardly realistic, and so the long-term viability of frozen data is just as problematic. To keep long-term data viable, it has to move with technology. This means not only copying it across evolving media technologies, but also keeping its format viable. For example: XML today, but tomorrow...what? That said, if long-term archiving (in the generic sense) is needed, it is best to proceed in as "vanilla" a manner as possible. For example, rather than create a backup of your commercial database, instead perform an unload: this will make the data reloadable into any contemporary database. Keep in mind that it is not the TSM administrator's responsibility to assure anything other than the safekeeping of stored data. It is the responsibility of the data's owners to assure that it is logically usable in the future. See "TSM for Data Retention": that product facilitates long-term retention in several ways, including moving data to new recording technology over time.
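The interface-script idea described under "Archive, compensate for primitiveness" above might look like the following shell fragment. This is only a sketch: the description layout (filename, then ":Archive Date:" and a YYYYMMDD HH:MM:SS timestamp) and the function name are this sketch's own conventions, not TSM defaults, and the dsmc command is echoed rather than executed so that the sketch is safe to try.

```shell
# Sketch of an archive front-end that supplies a unique, datestamped
# -DEScription when the user gives none. The description layout is
# illustrative, not a TSM standard; the dsmc command is echoed, not run.
archive_with_desc() {
    # $1: file to archive; $2: optional user-supplied description
    desc="$2"
    if [ -z "$desc" ]; then
        # Hierarchical YYYYMMDD date plus time renders each object unique
        # and wildcard-searchable (YYYY*, YYYYMM*).
        desc="$(basename "$1") :Archive Date: $(date '+%Y%m%d %H:%M:%S')"
    fi
    echo dsmc archive "$1" -DEScription="$desc"
}
```

A real version would run the generated command and append the details (node, timestamp, filename, description) to a per-user tracking file, as suggested above.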
Archive, prevent client from doing See: Archiving, prohibit
Archive, preview See the PREview command in TSM 5.3+.
Archive, space used by clients (nodes) on all volumes 'Query AUDITOccupancy [NodeName(s)] [DOmain=DomainName(s)] [POoltype=ANY|PRimary|COpy]' Note: It is best to run 'AUDit LICenses' before doing 'Query AUDITOccupancy' to assure that the reported information will be current.
Archive a file system 'dsmc archive -SUbdir=Yes /fsname/'
Archive and directories As of ADSMv3, Archive will send the housing directory to the TSM server, along with the file being archived. However, it is important to understand how this operates, in order to avoid later problems... When Archive is invoked, it identifies the directory path portion of the file path and then queries the TSM server to first determine if that directory already exists in TSM server storage, per its HL and LL and description. By default, the client employs a year, month, day datestamp in the description, which assures that there will be at least one directory image sent that day. If, however, you employ a fixed description string, such as "Sara Clark employee info" when archiving this file periodically over the years, you will never send more than one directory instance to the TSM server, meaning that the one that was initially sent will be the only one in TSM storage, and it has the retention values that prevailed back then. In later archiving of the same file, you may employ a longer retention value. In this scenario, it is then possible for the early file *and* directory to expire, leaving just the later images of the file in TSM storage, with no directory. If you then go to query or retrieve the file after its directory expires and before the next Archive is performed, you can understandably then have problems. The upshot of this is that you need to avoid archiving objects where the Description is always the same, but management classes or retention values change.
See also: dsmc Delete ARchive and directories
Archive and Migration If a disk Archive storage pool fills, ADSM will start a Migration to tape to drain it; but because the pool filled and there is no more space there, the active Archive session wants to write directly to tape; but that tape is in use for Migration, so the client session has to wait.
Archive archives nothing A situation wherein you invoke Archive like 'dsmc arch "/my/directory/*"' and nothing gets archived. Possible reasons:
- /my/directory/ contains only subdirectories, no files; and the subdirectories had been archived in previous Archive operations.
- You have EXCLUDE.ARCHIVE statements which specify the files in this directory.
Archive Attribute In Windows, an advanced attribute of a file, as seen under file Properties, Advanced. It is used by lots of other backup software to indicate whether a file was already backed up, and whether it has to be backed up the next time. As of TSM 5.2, the Windows client provides a RESETARCHIVEATTRibute option for resetting the Windows archive attribute for files during a backup operation. As the Windows Client manual says, TSM does not use the Windows Archive Attribute to determine if a file is a candidate for incremental backup, but only manipulates this attribute for reporting purposes. See also: RESETARCHIVEATTRibute
Archive bit See: Archive Attribute
Archive copy An object or group of objects residing in an archive storage pool in ADSM storage.
Archive Copy Group A policy object that contains attributes that control the generation, destination, and expiration of archived copies of files. An archive copy group is stored in a management class.
Archive Copy Group, define 'DEFine COpygroup DomainName PolicySet MGmtclass Type=Archive DESTination=PoolName [RETVer=N_Days|NOLimit] [SERialization=SHRSTatic|STatic|SHRDYnamic|DYnamic]'
Archive descriptions Descriptions are supplementary identifiers which assist in uniquely identifying archive files.
Descriptions are stored in secondary tables, in contrast to the primary archive table entries which store archive directory and file data information. If you do not code -DEScription when performing an Archive, the TSM client renders the default description: Archive Date: 2002/05/03 where the date portion is the year, month, and day, and is always 10 characters long (even if it may sometimes be YY/MM/DD rather than include the four-digit year in form YYYY/MM/DD). Note that the 'UPDate ARCHIve' server command has a RESETDescriptions option which will result in neutering archive descriptions and cause all but one of the archive entries in a directory to go away. See also: -DEScription="..."
Archive directories, delete The 'UPDate ARCHIve' server command can delete all archive directory objects - a rather drastic operation.
Archive directory An archive directory is defined to be unique by: node, filespace, directory/level, owner and description. See also: Archive and directories; CLEAN ARCHDIRectories
Archive drive contents Windows: dsmc archive d:\* -subdir=yes
Archive fails on single file Andy Raibeck wrote in March 1999: "In the case of a SELECTIVE backup or an ARCHIVE, if one or more files can not be backed up (or archived) then the event will be failed. The rationale for this is that if you ask to selectively back up or archive one or more files, the assumption is that you want each and every one of those files to be processed. If even one file fails, then the event will have a status of failed. So the basic difference is that with incremental we expect that one or more files might not be able to be processed, so we do not flag such a case as failed. In other cases, like SELECTIVE or ARCHIVE, we expect that each file specified *must* be processed successfully, or else we flag the operation as failed."
Archive files, how to See: dsmc Archive
Archive logging See: AUDITLOGGing
Archive operation, retry when file in use Have the CHAngingretries (q.v.)
option in the Client System Options file (dsm.sys) specify how many retries you want. Default: 4.
Archive retention grace period The number of days ADSM retains an archive copy when the server is unable to rebind the object to an appropriate management class. Defined via the ARCHRETention parameter of 'DEFine DOmain'.
Archive retention grace period, query 'Query DOmain Format=Detailed', see "Archive Retention (Grace Period)".
Archive retention period, change See: Retention period for archived files, change
Archive root (/) file system 'dsmc archive / -SUbdir=Yes'
Archive storage pool, keep separate It is best to keep your Archive storage pool separate from others (Backup, HSM) so that restorals can be done more quickly. If Archive data were in the same storage pool as Backups, there would be a lot of unrelated data for the restoral to have to skip over.
Archive users SELECT DISTINCT OWNER FROM ARCHIVES [WHERE node_name='UpperCase'] SELECT NODE_NAME,OWNER,TYPE,COUNT(*) AS "Number of objects" FROM ARCHIVES WHERE NODE_NAME='____' or NODE_NAME='____' GROUP BY NODE_NAME,OWNER,TYPE
Archive users, files count SELECT OWNER,count(*) AS "Number of files" FROM ARCHIVES WHERE NODE_NAME='UPPER_CASE_NAME' GROUP BY OWNER
Archive vs. Backup Archive is intended for the long-term storage of individual files on tape, while Backup is for safeguarding the contents of a file system to facilitate the later recovery of any part of it. Returning files to the file system en masse is thus the forte of Restore, whereas Retrieve brings back individual files as needed. Retention policies for Archive files are rudimentary, whereas for Backups they are much more comprehensive. At initiation time, a prominent difference between the two is that to back up a file, you must be its owner, whereas to archive it, you need only have read access to it. See also: http://www.storsol.com/cfusion/template.cfm?page1=wp_whyaisa&page2=blank_men
Archive vs. Selective Backup, differences The two are rather similar; but...
The owner of a backup file is the user whose name is attached to the file, whereas the owner of an archive file is the person who performed the Archive operation. Frequency of archive is unrestricted, whereas backup can be restricted. Retention rules are simple for archive, but more involved for backup. Archive files are deletable by the end user; Backup files cannot be selectively deleted. ADSMv2 Backup would handle directories, but Archive would not: in ADSMv3+, both Backup and Archive handle directories. Retrieval is rather different for the two: backup allows selection of old versions by date; archive distinction is by date and/or the Description associated with the files.
ARCHIVE_DATE Column in *SM server database ARCHIVES table recording when the file was archived. Format: YYYY-MM-DD HH:MM:SS.xxxxxx Example: SELECT * FROM ARCHIVES WHERE ARCHIVE_DATE> '1997-01-01 00:00:00.000000' AND ARCHIVE_DATE< '1998-12-31 00:00:00.000000'
Archived copy A copy of a file that resides in an ADSM archive storage pool.
Archived file, change retention? See: Retention period for archived files, change
Archived files, count SELECT COUNT(*) AS "Count" FROM ARCHIVES WHERE NODE_NAME=''
Archived files: deletable by client node? Whether the client can delete its archived files now stored on the server. Controlled by the ARCHDELete parameter on the 'REGister Node' and 'UPDate Node' commands. Default: Yes. Query via 'Query Node Format=Detailed'.
Archived files, delete from client Via client command: 'dsmc Delete ARchive FileName(s)' (q.v.) You could first try it on a 'Query ARchive' to get comfortable.
Archived files, list from client See: dsmc Query ARchive
Archived files, list from server 'SHow Archives NodeName FileSpace'
Archived files, list from server, by volume 'Query CONtent VolName ...'
Archived files, rebinding does not occur From the TSM Admin.
manual, chapter on Implementing Policies for Client Data, topic How Files and Directories Are Associated with a Management Class: "Archive copies are never rebound because each archive operation creates a different archive copy. Archive copies remain bound to the management class name specified when the user archived them." (Reiterated in the client B/A manual, under "Binding and Rebinding Management Classes to Files".) Beware, however, that changing the retention setting of a management class's archive copy group will cause all archive versions bound to that management class to conform to the new retention. Note that you can use the ARCHmc option to specify an alternate management class for the archive operation.
Archived files, report by owner As of ADSMv3 there is still no way to do this from the client. But it can be done within the server via SQL, like: SELECT OWNER, FILESPACE_NAME, TYPE, ARCHIVE_DATE FROM ARCHIVES WHERE NODE_NAME='UPPER_CASE_NAME' - AND OWNER='joe'
Archived files, report by year Example: SELECT * FROM ARCHIVES WHERE YEAR(ARCHIVE_DATE)=1998
Archived files, retention period Is part of the Copy Group definition. Is defined in DEFine DOmain to provide a just-in-case default value. Note that there is one Copy Group in a Management Class for backup files, and one for archived files, so the retention period is essentially part of the Management Class.
Archived files, retention period, set The retention period for archive files is set via the "RETVer" parameter of the 'DEFine COpygroup' ADSM command. Can be set for 0-9999 days, or "NOLimit". Default: 365 days.
Archived files, retention period, update See: Retention period for archived files, change
Archived files, retention period, query See: 'Query COpygroup ...
Type=Archive'
Archived files, retrieve from client Via client dsmc command: 'RETrieve [-DEScription="..."] [-FROMDate=date] [-TODate=date] [-FROMOwner=owner] [-FROMNode=node] [-PIck] [-Quiet] [-REPlace=value] [-SErvername=StanzaName] [-SUbdir=No|Yes] [-TAPEPrompt=value] OrigFileName(s) [NewFileName(s)]'
Archived files don't show up Some users have encountered the unusual problem of having archived files which they know should not yet have expired, yet the archived files do not show up in a client query, despite the query being performed by the owning user, etc. Analysis with a Select on the Archives table revealed the cause to be directories missing from the server storage pools, which prevented hierarchically finding the files in a client -subdir query. The fix was to re-archive the missing directories. Use ARCHmc (q.v.) to help avoid problems. See also: ANS1302E
ARCHIVES SQL: *SM server database table containing basic information about each archived object (but not its size). Along with BACKUPS and CONTENTS, constitutes the bulk of the *SM database contents. Columns: NODE_NAME, FILESPACE_NAME, FILESPACE_ID, TYPE (DIR, FILE), HL_NAME, LL_NAME, OBJECT_ID (like 222414213), ARCHIVE_DATE, OWNER, DESCRIPTION (by default, "Archive Date: MM/DD/YY"), CLASS_NAME. Note that the SQL table which customers see as ARCHIVES is actually composed of internal tables Archive.Objects and Archive.Descriptions, the latter being a secondary table created to improve performance in Retrieve operations. Note that this may be a very large table in sites which do a lot of TSM Archiving. A Select on it can take a very long time, if very specific Where options are not employed in the search. See also: HL_NAME; LL_NAME
Archiving, prohibit Prohibit archiving by employing one of the following:
In the *SM server:
- LOCK Node, which prevents all access from the client - and which may be too extreme.
- ADSMv2: Do not define an archive Copy Group in the Management Class used by that user.
This causes the following message when trying to do an archive: ANS5007W The policy set does not contain any archive copy groups. Unable to continue with archive. - ADSMv3: Code NOARCHIVE in the include-exclude file, as in: "include ?:\...\* NOARCHIVE" which prevents all archiving. - 'UPDate Node ... MAXNUMMP=0', to be in effect during the day, to prevent Backup and Archive to tape, but allow Restore and Retrieve. In the *SM client: - Make the client a member of a domain and policy set which has no archive copygroup. - Employ EXCLUDE.ARCHIVE for the subject area. For example, you want to prevent your client system users from archiving files that are in file system /fs1: EXCLUDE.ARCHIVE /fs1/.../* Attempts to archive will then get: ANS1115W File '/fs1/abc/xyz' excluded by Include/Exclude list Retrieve and Delete Archive continue to function as usual. ARCHmc (actually, -ARCHmc=_____) Archive option, to be specified on the 'dsmc archive' command line (only), to select a Management Class and thus override the default Management Class for the client Policy Domain. (ADSM v3.1 allowed it in dsm.opt; but that's not the intention of the option.) Default: the Management Class in the active Policy Set. See "Archive files, how to" for example. As of ADSMv3.1 mid-1999 APAR IX89638 (PTF 3.1.0.7), archived directories are not bound to the management class with the longest retention (RETOnly) as is the rule for backups. See also: CLASS_NAME; dsmBindMC; Query COpygroup ArchQry TSM transaction verb for when Query Archive is being performed by a client. ARCHRETention Parameter of 'DEFine DOmain' to specify the retention grace period for the policy domain, to protect old versions from deletion when the respective archive copy group is not available. Specified as the number of days (from date of archive) to retain archive copies. Default: 365 (days) ARCserve Competing product from Computer Associates, to back up Microsoft Exchange Server mailboxes. 
Advertises the ability to restore individual mailboxes, but what they don't tell you is that they do it in a non-Microsoft supported way: they totally circumvent the MS Exchange APIs. The performance is terrible and the product as a whole has given customers lots of problems. See also: Tivoli Storage Manager for Mail ARCHSYMLinkasfile Client option for use with Archive, as of ADSMv3 PTF 7. If you specify ARCHSYMLinkasfile=No then symbolic links will not be followed: the symlink itself will be archived. If you specify ARCHSYMLinkasfile=Yes (the default), then symbolic links will be followed in order to archive the target files. Unrelated: See also FOLlowsymbolic ARTIC 3494: A Real-Time Interface Coprocessor. This card in the industrial computer within the 3494 manages RS-232 and RS-422 communication, as serial connections to a host and command/feedback info with the tape drives. A patch panel with eight DB-25 slots mounted vertically in the left hand side of the interior of the first frame connects to the card. AS SQL clause for assigning an alias to a report column header title, rather than letting the data name be the default column title or expression used on the column's contents. The alias then becomes the column name in the output, and can be referred to in GROUP BY, ORDER BY, and HAVING clauses - but not in a WHERE clause. The title string should be in double quotes. Note that if the column header widths in combination exceed the width of the display window, the output will be forced into "Title: Value" format. Sample: SELECT VOLUME_NAME AS - "Scratch Vols" FROM LIBVOLUMES WHERE STATUS='Scratch' results in output like: Scratch Vols ------------------ 000049 000084 See also: -DISPLaymode AS/400 There is no conventional TSM B/A client. Instead, there is BRMS, which utilizes the TSM API. 
See: http://www.ibm.com/systems/i/support /brms/adsmclnt.html For general info: http://www.ibm.com/systems/i/ ASC SQL: Ascending order, in conjunction with ORDER BY, as in: GROUP BY NODE_NAME ORDER BY NODE_NAME ASC See also: DESC ASC/ASCQ codes Additional Sense Codes and Additional Sense Code Qualifiers involved in I/O errors. The ASC is byte 12 of the sense bytes, and the ASCQ is byte 13 (as numbered from 0). They are reported in hex, in message ANR8302E. ASC=29 ASCQ=00 indicates a SCSI bus reset. Could be a bad adapter, cable, terminator, drive, etc. The drives could be causing an adapter problem which in turn causes a bus reset, or a problematic adapter could be causing the bus reset that causes the drive errors. ASC=3B ASCQ=0D is "Medium dest element full", which can mean that the tape storage slot or drive is already occupied, as when a library's inventory is awry. Perform a re-inventory. ASC=3B ASCQ=0E is "Medium source element empty", saying that there is no tape in the storage slot as there should be, meaning that the library's inventory is awry. Perform a re-inventory. See Appendix B of the Messages manual. See also: ANR8302E ASR Automated System Recovery - a restore feature of Windows XP Professional and Windows Server 2003 that provides a framework for saving and recovering the Windows XP or Windows Server 2003 operating state, in the event of a catastrophic system or hardware failure. Unlike image-based recovery, with ASR the hardware does not have to be wholly identical: "The hardware configuration of the target system must be identical to that of the original system, with the exception of the hard disks, video cards, and network interface cards." TSM creates the files required for ASR recovery and stores them on the TSM server. In the backup, TSM will generate the ASR files in the :\adsm.sys\ASR staging directory on your local machine and store these files in the ASR file space on the TSM server.
ASR is a two-phase process: First, Windows installs a temporary operating system image using the original operating system media; second, Windows invokes TSM to restore the system volume and system state information. Ref: Windows B/A Client manual, Appendix F "ASR supplemental information"; Redbook "TSM BMR for Windows 2003 and XP"; Tivoli Field Guide on the subject (IBM site White Paper 7003812) Msgs: ANS1468E Image-based recovery is another approach, using Windows PE or BartPE. ASSISTVCRRECovery Server option to specify whether the ADSM server will assist the 3570/3590 drive in recovering from a lost or corrupted Vital Cartridge Records (VCR) condition. If you specify Yes (the default) and if TSM detects an error during the mount processing, it locates to the end-of-data during the dismount processing to allow the drive to restore the VCR. During the tape operation, there may be some small effect on performance because the drive cannot perform a fast locate with a lost or corrupted VCR. However, there is no loss of data. See also: VCR ASSISTVCRRECovery, query 'Query OPTions', see "AssistVCRRecovery" Association Server-defined schedules are associated with client nodes so that the client will be contacted to run them in a client-server arrangement. See 'DEFine ASSOCiation', 'DELete ASSOCiation'. ASSOCIATIONS SQL table in the TSM server reflecting client associations with schedules, as established with 'DEFine ASSOCiation'. Columns: DOMAIN_NAME, SCHEDULE_NAME, NODE_NAME, CHG_TIME, CHG_ADMIN Note that if there is no association between an existing schedule and a node, then there is no entry in the table for it. In contrast, Query ASSOCiation will report schedules having no associations, because that is good to know - and reveals a distinction between Query commands and tables. See also: Query ASSOCiation Asynchronous I/O As via the AIXASYNCIO server option. Improves throughput and reduces I/O waiting on AIX servers.
Enables writing to DASD by TSM for its database, recovery log, and storage pools without having to wait for completion before additional writing can be initiated by the server. (Think: write-ahead.) See: AIXASYNCIO Atape Moniker for the AIX (pSeries) tape device driver, for both the tape drive and medium changer drivers. Formerly called Magstar tape driver. Supports 3590, 3580, 3570, 3592. (Contrast with the IBMTape device driver for Solaris, Linux, Windows.) Maintained by IBM hardware division. As such, if you have to call IBM about a problem with the driver, open a hardware problem, *not* a software problem with IBM. The Atape driver can be downloaded from IBM's FixCentral... http://www-933.ibm.com/support/fixcentral/ Product Group: System Storage Product Family: Tape Systems Product Type: Tape drivers and software Product: Tape device drivers Platform (select one) (Note that IBM does *not* provide an archive of past versions of Atape: expect to find the latest one, and one earlier. You may want to periodically get the current version and accumulate a library of versions at your site, just in case.) In AIX, is installed in /usr/lpp/Atape/. Sometimes, Atape will force you to re-create the TSM tape devices; and a reboot may be necessary (as in the Atape driver rewriting AIX's bosboot area): so perform such upgrades off hours. Support: IBM device drivers are written by their Hardware division. If you need to contact IBM for problems with such device drivers, submit a hardware problem report...but don't expect to get much help as a result. See also: atdd; IBMtape Atape header file, for programming AIX: /usr/include/sys/Atape.h Solaris: /usr/include/sys/st.h HP-UX: /usr/include/sys/atdd.h Windows: , Atape level 'lslpp -ql Atape.driver' atdd The device driver used for HP-UX UNIX platforms, supplied by IBM. It includes both the tape drive and medium changer drivers. 
Download from: ftp.software.ibm.com /storage/devdrvr/HPUX/ See also: Atape; IBMtape atime See: Access time; Backup ATL Automated Tape Library: a frame containing tape storage cells and a robotic mechanism which can respond to host commands to retrieve tapes from storage cells and mount them for reading and writing. atldd Moniker for the 3494 library device driver, "AIX LAN/TTY: Automated Tape Library Device Driver", software which comes with the 3494 on floppy diskettes. Is installed in /usr/lpp/atldd/. Download from: ftp://service.boulder.ibm.com/storage/ devdrvr/ or ftp://ftp.software.ibm.com/storage/ devdrvr/ Support: IBM device drivers are written by their Hardware division. If you need to contact IBM for problems with such device drivers, submit a hardware problem report...but don't expect to get much help as a result. See also: LMCP atldd Available? 'lsdev -C -l lmcp0' atldd level 'lslpp -ql atldd.driver' ATS IBM Advanced Technical Support. They host "Lunch and Learn" conference call seminars. ATTN messages (3590) Attention (ATTN) messages indicate error conditions that customer personnel may be able to resolve. For example, the operator can correct the ATTN ACF message with a supplemental message of Magazine not locked. Ref: 3590 Operator Guide (GA32-0330-06) Appendix B especially. Attribute See: Volume attributes Attributes of tape drive, list AIX: 'lsattr -EHl rmt1' or 'mt -f /dev/rmt1 status' AUDit DB Undocumented (and therefore unsupported) server command in ADSMv3+, ostensibly a developer service aid, to perform an audit on-line (without taking the server down). Syntax (known): 'AUDIT DB [PARTITION=partition-name] [FIX=Yes]' e.g. 'AUDIT DB PARTITION=DISKSTORAGE' as when a volume cannot be deleted. See also: dsmserv AUDITDB AUDit LIBRary Creates a background process which (as in verifying 3494's volumes) checks that *SM's knowledge of the library's contents is consistent with the library's inventory.
This is a bidirectional synchronization task, where the TSM server acquires library inventory information and may subsequently instruct the library to adjust some volume attributes to correspond with TSM volume status info. Syntax: 'AUDit LIBRary LibName [CHECKLabel=Yes|Barcode]' CHECKLabel pertains only to libraries which have no form of embedded library manager database, where the only way to verify tapes is for the robotic scanner to go look at them. The 3494 and 3584 libraries maintain an internal database, where TSM merely has to ask the library for its inventory information. (The Reference Manual fails to explain this.) The Barcode check was added in the 2.1.x.10 level of the server to make barcode checking an option rather than the implicit default, due to so many customers having odd barcodes (as in those with more than 6-char serials). Also, using CHECKLabel=Barcode greatly reduces time by eliminating mounts to read the header on the tapes - which is acceptable if you run a tight ship and are confident of barcodes corresponding with internal tape labeling. Sample: 'AUDit LIBRary OURLIB'. The audit needs to be run when the library is not in use (no volumes mounted): if the library is busy, the Audit will likely hang. Runtime: Probably not long. One user with 400 tapes quotes 2-3 minutes. Note that this audit is performed automatically when the server is restarted (no known means of suppressing this) - which explains why a restart takes so long. Tip: With a 3494 or comparable library, you may employ the 'mtlib' command to check the category codes of the tapes in the library for reasonableness, and possibly use the 'mtlib' command to adjust errant values without resorting to the disruption of an AUDit LIBRary. In a 349X library, AUDit LIBRary will instruct the library to restore Scratch and Private category codes to match TSM's libvolumes information. 
This is a particularly valuable capability for when library category codes have been wiped out by an inadvertent Teach or Reinventory operation at the library (which resets category codes to Insert). What this does *not* do: This function is for volume consistency, and does not delve into volume contents, and thus cannot help recover inventory info where the TSM db has been lost. AUDit LIBRary, in Library Manager env In a Library Manager, Library Client environment, it's important to appreciate how AUDit LIBRary works... When performed on the Library Manager, AUDit LIBRary will sync that TSM server's libvolumes list with the reality of the tape library. When performed on the Library Client, AUDit LIBRary will sync that TSM server's libvolumes list with the volumes in the Library Manager database which are designated as being owned by that Library Client server. The LM and LC inventories can get out of sync due to inter-server communication problems or software defects. AUDit LICenses *SM server command to start a background process which audits both the data storage used by each client node and the licensing features in use on the server. It pursues filespace data to gather its information. This process then compares the storage utilization and other licensing factors to the license terms that have been defined to the server to determine if the current server configuration is in compliance with the license terms. The AUDITSTorage server option is available to omit the storage calculation portion of the operation, to reduce server overhead. There is no "Wait" capability, so use with server scripts is awkward. Syntax: 'AUDit LICenses'. Will hopefully complete with messages ANR2825I License audit process NNNN completed successfully - N nodes audited ANR2811I Audit License completed - Server is in compliance with license terms. You may instead find: "ANR2841W Server is NOT IN COMPLIANCE with license terms."
and 'Query LICense' reports: Server License Compliance: FAILED Must be done before running 'Query AUDITOccupancy' for its output to show current values. Note that the time of the audit shows up in Query AUDITOccupancy output. Msgs: ANR2812W; ANR2834W; ANR2841W; ANR0987I See also: Auditoccupancy; AUDITSTorage; License...; Query LICense; REGister LICense; Set LICenseauditperiod; SHow LMVARS AUDIT RECLAIM Command introduced in v3.1.1.5 to fix a bug introduced by the 3.1.0.0 code. See also: RECLAIM_ANALYSIS AUDit Volume TSM server command to audit a primary or copy storage pool volume, and optionally fix inconsistencies. If a disk volume, it must be online; if a tape volume, it will be mounted (unless TSM realizes that it contains no data, as when you are trying to fix an anomaly). Tape Access Mode must not be "offsite", else get msg ANR2425E. If so, try changing to ACCess=READOnly, then repeat the Audit, which traditionally works. What this does is validate file information stored in the database with that stored on the tape. It does this by reading every byte of every file on the volume and checks control information which the server imbeds in the file when it is stored. The same code is used for reading and checking the file as would be used if the file were to be restored to a client. (In contrast, MOVe Data simply copies files from one volume to another. There are, however, some conditions which MOVe Data will detect which AUDit Volume will not.) If a file on the volume had previously been marked as Damaged, and Audit Volume does not detect any errors in it this time, that file's state is reset. AUDit Volume is a good way to fix niggly problems which prevent a volume from finally reaching a state of Empty when some residual data won't otherwise disappear. Syntax: 'AUDit Volume VolName [Fix=No|Yes] [SKIPPartial=No|Yes] [Quiet=No|Yes]'. "Fix=Yes" will delete unrecoverable files from a damaged volume (you will have to re-backup the files). 
Caution: Do not use AUDit Volume on a problem disk volume without first determining, from the operating system level, what the problem with the disk actually is. Realize that a disk electronics problem can make intact files look bad, or inconsistently make them look bad. What goes on: The database governs all, and so location of the files on the tape is necessarily controlled by the current db state. That is to say, Audit Volume positions to each successive file according to db records. At that position, it expects to find the start of a file it previously recorded on the medium. If not (as when the tape had been written over), then that's a definite inconsistency, and eligible for db deletion, depending upon Fix. The Audit reads each file to verify medium readability. (The Admin Guide suggests using it for checking out volumes which have been out of circulation for some time.) Medium surface/recording problems will result in some tape drives (e.g., 3590) doggedly trying to re-read that area of the tape, which will entail considerable time. A hopeless file will be marked Damaged or otherwise handled according to the Fix rules. If Audit cannot repair the medium problem: you can thereafter do a Restore Volume to logically fix it. Whether the medium itself is bad is uncertain: there may indeed be a bad surface problem or creasing in the tape; but it might also be that the drive which wrote it did so without sufficient magnetic coercivity, or the coercivity of the medium was "tough", or tracking was screwy back then - in which case the tape may well be reusable. Exercise via tapeutil or the like is in order. Audit Volume has additional help these days: the CRCData Stgpool option now in TSM 5.1, which writes Cyclic Redundancy Check data as part of storing the file. This complements the tape technology's byte error correction encoding to check file integrity.
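The verify-on-read idea behind AUDit Volume and CRCData can be pictured with a toy shell sketch, with no TSM involvement; 'cksum' stands in for the control information the server embeds with each stored file:

```shell
# Toy illustration only: record a checksum when data is "stored",
# recompute it when the "volume" is audited.  cksum stands in for the
# embedded control information / CRC; a temp file stands in for a volume.
vol=$(mktemp)
echo "file payload" > "$vol"
stored=$(cksum < "$vol")          # noted in the "database" at store time
audit=$(cksum < "$vol")           # recomputed by the audit pass
if [ "$audit" = "$stored" ]; then
    echo "volume clean"
else
    echo "mark file Damaged"      # the condition Fix=Yes would act upon
fi
rm -f "$vol"
```

In the real product the comparison is of course done file by file on the mounted volume, using the same read path as a client restore.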
Ref: TSM 5.1 Technical Guide redbook DR note: Audit Volume cannot rebuild *SM database entries from storage pool tape contents: there is no capability in the product to do that kind of thing. During operation, Query PRocess will show like: Volume 003798 (storage pool STGP_BACKUP_MAIL_3592), Files Processed: 0, Damaged Files Deleted: 0, Partial Files Skipped: 0. Current Physical File (bytes): 15,524,161 Current input volume: 003798. Msgs: ANR2333W, ANR2334W See also: dsmserv AUDITDB AUDit Volume performance Will be impacted if CRC recording is in effect. AUDITDB See: 'DSMSERV AUDITDB' AUDITLOGGing TSM 5.5+ client option to generate an audit log which contains an entry for each file that is processed during an incremental, selective, archive, restore, or retrieve operation. Syntax: AUDITLOGGing Off|Basic|Full to turn it on/off and select thoroughness. AUDITOCC SQL: TSM database table housing the data that Query AUDITOccupancy reports, which reports storage pool occupancy numbers (differently from Query OCCupancy). Columns: NODE_NAME, BACKUP_MB, BACKUP_COPY_MB, ARCHIVE_MB, ARCHIVE_COPY_MB, SPACEMG_MB, SPACEMG_COPY_MB, TOTAL_MB This table includes primary and copy storage pool numbers, separated, in contrast to 'Query AUDITOccupancy', which reports them combined. The MB values reflect Physical space (the size of Aggregates of files) rather than Logical space (files surviving within Aggregates). The *_COPY_MB reflect the amount of that type of data in all copy storage pools, regardless of onsite and/or offsite. Be sure to run 'AUDit LICenses' before reporting from it (as is also required for 'Query AUDITOccupancy'). The Audit Occupancy table is a current-state data source. If you want an interval or event based data source, utilize the SUMMARY table or accounting records. See also: AUDITSTorage; Copy Storage Pools current?; OCCUPANCY; Query AUDITOccupancy AUDITSTorage TSM server option. 
As part of a license audit operation, the server calculates, by node, the amount of server storage used for backup, archive, and space-managed files. For servers managing large amounts of data, this calculation can take a great deal of CPU time and can stall other server activity. You can use the AUDITSTorage option to specify that storage is not to be calculated as part of a license audit. Note: This option was previously called NOAUDITStorage. Syntax: "AUDITSTorage Yes|No" Yes Specifies that storage is to be calculated as part of a license audit. This is the default. No Specifies that storage is not to be calculated as part of a license audit. (Expect this to impair the results from Query AUDITOccupancy) Authentication The process of checking and authorizing a user's password before allowing that user access to the ADSM server. (Password prompting does not occur if PASSWORDAccess is set to Generate.) Authentication can be turned on or off by an administrator with system privilege. See also: Password security Authentication, query 'Query STatus' Authentication, turn off 'Set AUthentication OFf' Authentication, turn on 'Set AUthentication ON' The password expiration period is established via 'Set PASSExp NDays' (Defaults to 90 days). Authorization Rule A specification that allows another user to either restore or retrieve a user's objects from ADSM storage. Authorize access to files See: dsmc SET Access Authorized User In the TSM Client for Unix: any user running with a real user ID of 0 (root), or who owns the TSM executable with the owner execution permission bit set to s. Auto Fill 3494 device state for its tape drives: pre-loading is enabled, which will keep the ACL index stack filled with volumes from a specified category. 
See /usr/include/sys/mtlibio.h Auto Migration, manually perform for 'dsmautomig [FSname]' file system (HSM) Auto Migrate on Non-Usage In output of 'dsmmigquery -M -D', an (HSM) attribute of the management class which specifies the number of days since a file was last accessed before it is eligible for automatic migration. Defined via AUTOMIGNOnuse in management class. See: AUTOMIGNOnuse Auto-sharing See: 3590 tape drive sharing AUTOFsrename Macintosh and Windows client option controlling the automatic renaming of pre-Unicode filespaces on the *SM server when a Unicode-enabled client is first used. The filespace is renamed by adding "_OLD" to the end of its name. Syntax: AUTOFsrename Prompt | Yes | No AUTOLabel Parameter of DEFine LIBRary, as of TSM 5.2, to specify whether the server attempts to automatically label tape volumes for SCSI libraries. See also: DEFine LIBRary; dsmlabel; LABEl LIBVolume Autoloader A very small robotic "library", housing a small number of cartridge tape storage cells (typically, 8 or fewer) and a single drive. The name serves to distinguish such a device from a true library. A bar code reader, where available, is usually an optional feature. The design of such a device takes many forms. With the floor-standing 3480 tape drive, an autoloader was an outboard attachment with a small elevator which positioned a cartridge to the drive mouth. The early 3581 Ultrium Tape Autoloader had a set of 8 fixed storage cells and a small picker/elevator which pulled the cartridge out of the back of the cell and into a drive installed in the rear of the box. The later 3581 Ultrium Tape Autoloader went for a rack-mount, rather than boxy, design, and in its flat layout had eight cells mounted on a racetrack, circling a single drive. Automatic Cartridge Facility 3590 tape drive: a magazine which can hold 10 cartridges.
Automatic migration (HSM) The process HSM uses to automatically move files from a local file system to TSM storage based on options and settings chosen by a root user on your workstation. This process is controlled by the space monitor daemon (dsmmonitord). Is governed by the "SPACEMGTECH=AUTOmatic|SELective|NONE" operand of MGmtclass. See also: Threshold migration; Demand migration; dsmautomig Automatic reconciliation The process HSM uses to reconcile your file systems at regular intervals set by a root user on your workstation. This process is controlled by the space monitor daemon (dsmmonitord). See: Reconciliation; RECOncileinterval AUTOMIGNOnuse Mgmtclass parameter specifying the number of days which must elapse since the file was last accessed before it is eligible for automatic migration. Default: 0 meaning that the file is immediately available for migration. Query: 'Query MGmtclass' and look for "Auto-Migrate on Non-Use". Beware setting this value higher than one or two days: if all the files are accessed, the migration threshold may be exceeded and yet no migration can occur; hence, a thrashing situation. See also: Auto Migrate on Non-Usage AUTOMount Client option to tell TSM that the file system is automounted, so that it will both cause it to be mounted for the backup, and to assure that it stays mounted for the duration of the session. Works in conjunction with the DOMain statement, or file systems specified as backup objects. Is not needed if DOMain specifies all-auto-nfs or all-auto-lofs due to the nature of those specifications. Availability Element of 'Query STatus', specifying whether the server is enabled or disabled; that is, it will be "Disabled" if 'DISAble SESSions' had been done prior to TSM 5.3, or 'DISAble SESSions ALL' had been done in TSM 5.3+; else will show simply "Enabled" where all access is enabled, or "Enabled for Client sessions" if only 'ENable SESSions CLIent' is in effect. 
Note that you can also see this value via: select AVAILABILITY from status. Average file size: ADSMv2: In the summary statistics from an Archive or Backup operation, is the average size of the files processed. Note that this value is the true average, and is not the "Total number of bytes transferred" divided by "Total number of objects backed up" because the "transferred" number is often inflated by retries and the like. See also: Total number of bytes transferred AVG SQL function to yield the average of all the rows of a given numeric column. See also: COUNT; MAX; MIN; SUM B Unit declarator signifying Bytes. Example: "Page size = 4 KB" b Unit declarator signifying bits. Example: "Transmit at 56 Kb/sec" B/A or BA Abbreviation for Backup/Archive, as when referring to the B/A Client manual. See: Backup/Archive client BAC Informal acronym for the Backup/Archive Client. BAC Binary Arithmetic Compression: algorithm used in the IBM 3480 and 3490 tape system's IDRC for hardware compression of the data written to tape. See also: 3590 compression of data Back up only data less than a year old Some sites charge users for TSM server storage space, and their users then try to go cheap, as in seeking to have only newer data (less than a year old) backed up. In Unix you could readily do this for the home directory via: find ~ -mtime -365 -print > /tmp/files_list dsmc i -FILEList=/tmp/files_list (Resulting names containing spaces would require quoting; but that could be readily added via an appropriate command inserted in the pipe between the 'find' and the redirect.) Something similar could be done in other environments. Another option for the user is to have the TSM client compress the data being sent to the TSM server. Yet another approach, much simpler still from the standpoint of the backup, is for the user to move his older, less relevant data into an oldies folder and Exclude that from backup. It's a common practice to move old data into a "back room" folder anyway.
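The quoting concern just mentioned can be handled in the pipe itself. A sketch, using a throwaway directory in place of the real home directory ('dsmc' itself is not invoked here; substitute your own find criteria):

```shell
# Wrap each name emitted by 'find' in double quotes so that names
# containing spaces survive -FILEList parsing.  A scratch directory
# stands in for the home directory.
dir=$(mktemp -d)
touch "$dir/plain.txt" "$dir/name with spaces.txt"
find "$dir" -type f -mtime -365 -print |
    sed 's/.*/"&"/' > /tmp/files_list
cat /tmp/files_list               # every line arrives double-quoted
rm -rf "$dir"
```

The resulting /tmp/files_list would then be fed to 'dsmc i -FILEList=/tmp/files_list' as in the entry above.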
You could make it a site standard that user folders with a special oldies name would not be backed up, which would take care of things for this and similarly cheap users. Back up some files once a week See IBM Technote 1049445, "How to backup only some files once a week". Back up storage pool See: BAckup STGpool BackActiveQryResp TSM client-server transaction verb seen when an Incremental backup is starting for a file system or directory, when the client is asking the TSM server for the list of Active files for that area. Such a transaction may take a considerable amount of time for a large area. BACKDELete A Yes/No parameter on the 'REGister Node' and 'UPDate Node' commands to specify whether the client node can delete its own backup files from the server, as part of a dsmc Delete Filespace. Default: No. Its value can be seen in the TSM server command 'Query Node' and the client command 'dsmc Query SEssion'. The value must be Yes for Oracle/RMAN to delete old backups. See also: ARCHDELete Backed-up files, list from client 'dsmc Query Backup "*" -FROMDate=xxx -NODename=xxx -PASsword=xxx' Backed-up files, list from server You can do a Select on the Backups or Contents table for the filespace; but there's a lot of overhead in the query. A lower overhead method, assuming that the client data is Collocated, is to do a Query CONTent on the volume it was more recently using (Activity Log, SHow VOLUMEUSAGE). A negative COUnt value will report the most recent files first, from the end of the volume. Backed-up files count (HSM) In dsmreconcile log. Backed-up files missing You believe that certain files should have been backed up from a client file system, but when you later perform a query or attempt a restoral, they aren't evident. Several possibilities: - They were excluded from backup, by filename, directory, or file system. 
- The type of backup being done on your system is -INCRBYDate and the file was introduced to the file system via a method which preserved its old datestamp. - They cannot be seen because the directory containing them expired from TSM storage and the GUI you are using cannot show them because the directory is missing. (IBM Technote 1162784) - The files went Inactive and were expired some time ago. - The files are quite busy and cannot get backed up with your prevailing client CHAngingretries value. - Someone has been playing games, moving the subject files around in the file system over time. Don't use a GUI to check for such files: use the CLI, as in 'dsmc query backup -inactive FileName'. See also: Directories missing in restore Backhitch Relatively obscure term used to describe the start/stop repositioning that some tape drives have to perform after writing stops, in order to recommence writing the next burst of data adjoining the last burst. This is time-consuming and prolongs the backup of small files, and certainly reduces restoral performance, as TSM has to reposition within the tape to get to each file requested in the restoral. Less expensive tape technologies (DLT, LTO) are known for this, as their less powerful motors cannot accelerate and decelerate tape as fast as can premium drives such as the 359x series. The backhitch effect is sometimes called "shoe-shining", referring to the reciprocating motion. The impact on the medium can be severe, as there is much more wear, resulting in a shorter lifetime. (There's no good or consistent way to capture statistics on backhitch events.) Redbook "IBM TotalStorage Tape Selection and Differentiation Guide" notes that LTO is 5x slower than 3590H in its backhitch; and "In a non-data streaming environment, the excellent tape start/stop and backhitch properties of the 3590 class provides much better performance than LTO."
Some vendor drives deal with this problem via Speed Matching, reducing the write speed to *try* to match the incoming data rate - but drives can reliably reduce speed only to about 50%...which helps up to a point, but cannot compensate for significant periods where no data comes in. The best way to avoid this is to first accumulate all the data that is to be written to the tape, before initiating the writing, where this can be accomplished via a capacious disk storage pool as the arrival point for client-sent data. See Tivoli whitepaper "IBM LTO Ultrium Performance Considerations" http://en.wikipedia.org/wiki/ Magnetic_tape_data_storage See also: DLT and start/stop operations; LTO performance; "shoe-shining"; Start-stop; Streaming Backint SAP client; uses the TSM API and performs TSM Archiving rather than Backup. Msgs prefix: BKI See also: TDP for R/3 BackQryRespEnhanced3 TSM client-server transaction verb for the client to ask for a list of Backup files from the server, sometimes with File Grouping. In 'SELECT * FROM SESSIONS', when you see this as the LAST_VERB, the BYTES_SENT value will grow as the BYTES_RECEIVED is unchanged on a Producer session. I believe that this is the result of using MEMORYEFficientbackup Yes. Ref: API manual BACKRETention Parameter of 'DEFine DOmain' to specify the retention grace period for the policy domain, to protect old versions from deletion when the respective Copy Group is not available. You should, however, have a Copy Group to formally establish your retention periods: do 'Query COpygroup' to check. Specify as the number of days (from date of deactivation) to retain backup versions that are no longer on the client's system. Backup The process of copying one or more files, directories, and other file system objects to a server backup type storage pool to protect against data loss. 
During a Backup, the server is responsible for evaluating versions-based retention rules, to mark the oldest Inactive file as expired if the new incoming version causes the oldest Inactive version to be "pushed out" of the set. (See: "Versions-based file expiration") ADSMv2 did not back up special files: character, block, FIFO (named pipes), or sockets. ADSMv3+ *will* back up some special files: character, block, FIFO (named pipes); but ADSMv3 will *not* back up or restore sockets (see "Sockets and Backup/Restore"). (More trivially, the "." file in the highest level directory on Unix systems is not backed up, which is why "objects backed up" is one less than "objects inspected".) Backup types: - Incremental: new or changed files; can be one of: - Full: All new and changed files in the file system are backed up, and takes care of deleted files; - Partial: Same effect as Full, but is limited to the part of the file system specified on the command line. - INCRBYDate: Simply looks for files new or changed since last backup date via examination of timestamps, so omits old-dated files new to the client, and deleted files are not expired. Via 'dsmc Incremental'. (Note that the file will be physically backed up again only if TSM deems the content of the file to have been changed: if Unix and only the attributes (e.g., permissions) have been changed, then TSM will simply update the attributes of the object on the server.) - Selective: you select the files. Via 'dsmc Selective'. Priority: Lower than BAckup DB and Restore. See "Preemption of Client or Server Operations" in the Admin Guide. Full incrementals are the norm, as started by 'dsmc incremental /FSName'. Use an Include-Exclude Options File if you need to limit inclusion. Use a Virtual Mount Point to start at other than the top of a file system. Use the DOMain Client User Options File option to define default filesystems to be backed up. (Incremental backup will back up empty directories.
Do 'dsmc Query Backup * -dirs -sub=yes' at the client to find the empties, or choose Directory Tree under 'dsm'.) To effect backup, TSM examines the file's attributes such as size, modification date and time (Unix mtime), ownership (Unix UID), group (Unix GID), (Unix) file permissions, ACL, special opsys markers such as NTFS file security descriptors, and compares them to those attributes of the most recent backup version of that file. (Unix atime - access time - is ignored.) Ref: B/A Client manual, "Backing Up and Restoring Files" chapter, "Backup: Related Topics", "What Does TSM Consider a Changed File"; and under the description of Copy Mode. This means that for normal incremental backups, TSM has to query the database for each file being backed up in order to determine whether that file is a candidate for incremental backup. This adds some overhead to the backup process. TSM tries to be generic where it can, and in Unix does not record the inode number. Thus, if a 'cp -p' or 'mv' is done such that the file is replaced (its inode number changes) but only the ctime attribute is different, then the file data will not be backed up in the next incremental backup: the TSM client will just send the new ctime value for updating in the TSM database. Backup changes the file's access timestamp (Unix stat struct st_atime): the time of last "access" or "reference", as seen via the Unix 'ls -alu ...' command. The NT client uses the FILE_FLAG_BACKUP_SEMANTICS option when a file is opened, to prevent updating the Access time. See also: Directories and Backup; -INCRBYDate; SLOWINCREMENTAL; Updating. Contrast with Restore. For a technique on backing up a large number of individual files, see entry "Archived files, delete from client".
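The attribute comparison described above can be illustrated outside TSM. A minimal sketch (hypothetical, not TSM code; GNU coreutils 'stat' assumed), showing that an attribute-only change such as chmod alters the compared attribute set, while a mere read - which moves only atime - does not:

```shell
# Hypothetical illustration - not part of TSM. The stat fields roughly
# mirror what TSM compares for backup candidacy: size, mtime, UID,
# GID, mode. atime (%X) is deliberately omitted, as TSM ignores it.
f=$(mktemp)
attrs() { stat -c '%s %Y %u %g %a' "$1"; }
before=$(attrs "$f")
chmod 640 "$f"            # attribute-only change: a backup candidate
after=$(attrs "$f")
cat "$f" > /dev/null      # a read moves only atime: not a candidate
final=$(attrs "$f")
if test "$before" != "$after"; then echo "chmod changed the compared attributes"; fi
if test "$after" = "$final"; then echo "read left the compared attributes alone"; fi
rm -f "$f"
```

On a Unix client, the chmod case would result in only an attribute update being sent to the server, not a re-send of the file data.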
Backup, always See: Backup, full (force) Backup, batched transaction buffering See: TXNBytelimit Backup, delete all copies Currently the only way to purge all copies of a single file on the server is to set up a new Management Class which keeps 0 versions of the file. Run an incremental while the file is still on the local FS and specify this new MC on an Include statement for that file. Next change the Include/Exclude so that the file is now excluded. The next incremental will expire the file under the new policy which will keep 0 inactive versions of the file. Backup, delete part of it ADSM doesn't provide a means for server commands to delete part of a backup; but you can effect it by emplacing an Exclude for the object to be deleted: the next backup will render it obsolete in the backups. Backup, exclude files Specify "EXclude" in the Include-exclude options file entry to exclude a file or group of files from ADSM backup services. (Directories are never excluded from backups.) Backup, full (force) You can get a full backup of a file system via one of the following methods (being careful to weigh the ramifications of each approach): - Do a Selective Backup; like 'dsmc s -su=y FSname' in Unix. (In the NT GUI, next to the Help button there is a pull-down menu: choose option "always backup".) - In the server, do 'UPDate COpygroup ... MODE=ABSolute' in the associated Management Class, which causes files to be backed up regardless of having been modified. (You will have to do a 'VALidate POlicyset' and 'ACTivate POlicyset' to put the change into effect.) Don't forget to change back when the backup is done. - Consider GENerate BACKUPSET (q.v.), which creates a package of the file system's current Active backup files. See: Backup Set; dsmc REStore BACKUPSET; Query BACKUPSETContents - At PC client: relabel the drive and do a backup. At Unix client: mount the file system read-only at a different mount point and do a backup.
- As server admin, do 'REName FIlespace' to cause the filespace to be fully repopulated in the next backup (hence a full backup): you could then rename this just-in filespace to some special name and rename the original back into place. - Define a variant node name which would be associated with a management class with the desired retention policy, code an alternate server stanza in the Client System Options file, and select it via the -SErvername command line option. Backup, full, occurring mysteriously This tends to be seen on Windows systems, and if not due to factors listed above, then can be due to a Windows administrator running wild, performing mass permissions changes in the file system. Backup, full, periodic (weekly, etc.) Some sites have backup requirements which do not mesh with TSM's "incremental forever" philosophy. For example, they want to perform incrementals daily, and fulls weekly and monthly. For guidance, see IBM site Solution 1083039, "Performing Full Client Backups with TSM". See also: Split retentions Backup, last (most recent) Determine the date of last backup via: Client command: 'dsmc Query Filespace' Server commands: 'Query FIlespace [NodeName] [FilespaceName] Format=Detailed' Select: SELECT * FROM FILESPACES WHERE - NODE_NAME='UPPER_CASE_NAME' and look at BACKUP_START, BACKUP_END Backup, management class used Shows up in 'Query Backup', whether via command line or GUI. Backup, more data than expected going If you perform a backup and expect like 5 GB of data to go and instead find much more, it's usually a symptom of retries, as in files being open and changing during the backup. Backup, OS/2 OS/2 files have an archive byte (-a or +a). Some say that if this changes, ADSM will back up such files; but others say that ADSM uses the filesize-filedate-filetime combination. Backup, preview See the PREview command in TSM 5.3+.
Backup, prohibit See: Backups, prevent Backup, selective A function that allows users to back up objects from a client domain that are not excluded in the include-exclude list and that meet the requirement for serialization in the backup copy group of the management class assigned to each object. Performed via the 'dsmc Selective' cmd. See: Selective Backup. Backup, space used by clients (nodes) on all volumes 'Query AUDITOccupancy [NodeName(s)] [DOmain=DomainName(s)] [POoltype=ANY|PRimary|COpy]' Note: You need to run 'AUDit LICenses' before doing 'Query AUDITOccupancy' for the reported information to be current. Backup, subfile See: Adaptive Differencing; Set SUBFILE; SUBFILE* Backup, successful? The Query EVent command can tell you this, where TSM client scheduling is employed (as opposed to operating system scheduling). When reviewing the output of the command, be sure to inspect the Actual Start column as well as the Status: it's easy to gloss over a status of Completed and not also look at the timestamp of when the schedule last ran, and thus not perceive a problem where a scheduler process is hung (ANR2576W). Consider something like the following to report on errors, to be run via schedule: /* FILESERVER BACKUP EXCEPTIONS */ Query EVent DomainName SchedName BEGINDate=TODAY-1 ENDDate=TODAY-1 EXceptionsonly=YES Format=Detailed >> /var/log/backup-problems File will end up with message: "ANR2034E QUERY EVENT: No match found for this query." if no problems (no exceptions found). Backup, undo There is no way to undo standard client Incremental or Selective backups. Backup, volumes used in There is no real way to determine what volumes were associated with a specific backup. The Volumeusage table is no good because it contains all primary and copy storage pool tapes associated with the node, regardless of date.
You could bracket a time period in the Activity Log and try to discern tape usage by virtue of mounts, but that's messy, and doesn't account for volumes which happened to already be mounted (residual tape - or disk, for that matter). You could try doing a Select from the Backups table, again trying to isolate by time period, but that's expensive to run and doesn't necessarily correlate to a session. Consider also that, at any time thereafter, the objects may move to a different volume. In any case, the nature of an Enterprise level product like TSM is that you should not need to know this info, as is the case in many virtualization technologies today: the managing subsystem takes care of data provisioning, and "you don't need to know". Backup, which file systems to back up Specify a file system name via the "DOMain option" (q.v.) or specify a file system subdirectory via the "VIRTUALMountpoint" option (q.v.) and then code it like a file system in the "DOMain option" (q.v.). Backup, which files are backed up See the client manual; search the PDF (Backup criteria) for the word "modified". In the Windows client manual, see: - "Understanding which files are backed up" - "Copy mode" - "Resetarchiveattribute" (TSM does not use the Windows archive attribute to determine if a file is a candidate for incremental backup.) - And, Windows Journal-based backup. It is also the case that TSM respects the entries in Windows Registry subkey HKLM\System\CurrentControlSet\Control\BackupRestore\FilesNotToBackup (No, this is not mentioned in the client manual; it is in the 4.2 Technical Guide redbook. File \Pagefile.sys should be in this list.) Always do 'dsmc q inclexcl' in Windows to see the realities of inclusion. Note that there is also a list of Registry keys not to be restored, in KeysNotToRestore. Unix: See the criteria listed under the description of "Copy mode" (p.128 of the 5.2 manual).
See also: FIOATTRIBS trace flag; MODE Backup always See: Backup, full (force) Backup Central W. Curtis Preston's commercial website about backup products and technologies, which you will find is very much about him. He took the unilateral action of attaching the non-commercial ADSM-L mailing list community to his site, as an apparent empire-building technique, annexing the volunteer expertise of ADSM-L members to further his site. Over time we have found that the questions coming in from Backup Central members reflect great lack of knowledge, and disinterest in referring to documentation to seriously make use of this enterprise level product, resulting in much wasted time for ADSM-L experts. Backup copies, number of Defined in Backup Copy Group. Backup Copy Group A policy object that contains attributes which control the generation, destination, and expiration of backup versions of files. A backup copy group belongs to a management class. Backup Copy Group, define 'DEFine COpygroup DomainName PolicySet MGmtclass [Type=Backup] DESTination=Pool_Name [FREQuency=Ndays] [VERExists=N_Versions|NOLimit] [VERDeleted=N_Versions|NOLimit] [RETExtra=N_Days|NOLimit] [RETOnly=N_Days|NOLimit] [MODE=MODified|ABSolute] [SERialization=SHRSTatic|STatic|SHRDYnamic|DYnamic]' Backup Copy Group, update 'UPDate COpygroup DomainName PolicySet MGmtclass [Type=Backup] [DESTination=Pool_Name] [FREQuency=Ndays] [VERExists=N_Versions|NOLimit] [VERDeleted=N_Versions|NOLimit] [RETExtra=N_Days|NOLimit] [RETOnly=N_Days|NOLimit] [MODE=MODified|ABSolute] [SERialization=SHRSTatic|STatic|SHRDYnamic|DYnamic]' BAckup DB TSM server command to back up the TSM database to tape (backs up only used pages, not the whole physical space). It does not also write the Recovery Log contents to that output tape. Will write to multiple volumes, if necessary. This operation is essential when LOGMode Rollforward is in effect, as this is the only way that the Recovery Log is cleared.
It's unclear whether this operation copies the current dbvolume configuration to the output volume; but that doesn't matter, in that the 'dsmserv restore db' operation requires that a TSM server already be installed with a formatted db and recovery log, where that space will be used as the destination of the restored data. Syntax: 'BAckup DB DEVclass=DevclassName [Type=Incremental|Full|DBSnapshot] [VOLumenames=VolNames|FILE:File_Name] [Scratch=Yes|No] [Wait=No|Yes]' The VOLumenames list will be used if there is at least one volume in it which is not already occupied; else TSM will use a scratch tape per the default Scratch=Yes. (The list may contain volumes which are currently occupied with data: TSM will realize this and skip them.) Each BAckup DB employs a new volume - no exceptions. (See IBM Technote 1153782.) That is, you cannot append your next backup to the end of the tape used for the previous backup. This one-backup-per-volume requirement is necessary principally because the volumes would be used by relatively basic restoral utilities, where tape positioning would be an undue complication. This approach also avoids reclamation issues, and facilitates securing this vital backup in a vault right after creation, then leaving it there, rather than retrieving it for appending a further db backup. And, Incremental DB Backup does *not* append its backup to the end of the last tape used in a full backup: each incremental writes to a scratch tape. Where a library is small, this tape requirement can outstrip scratch capacity: consider using FILE volumes instead. (You could even have a SATA volume running inside a safe to hold such backups!) DBSnapshot Specifies that you want to run a full snapshot database backup, to make a "point in time" image for possible later db restoral (in which the Recovery Log will *not* participate).
The entire contents of a database are copied and a new snapshot database backup is created without interrupting the existing full and incremental backup series for the database. If roll-forward db mode is in effect, and a snapshot is performed, the recovery log is *not* cleared (so it can continue to grow). Before doing one of these, be aware that the latest snapshot db backup cannot be deleted! (See Technote 1083952.) A snapshot is most commonly done where you are sending your in-band dbbackups offsite, but also want to have a copy onsite. Priority: Higher than filespace Backup and tape Reclamation, so will preempt one of those if it needs a drive and all are in use. The Recovery Log space represented in the backup will not be reclaimed until the backup finishes: the Pct Util does not decrease as the backup proceeds. The tape used *does* show up in a 'Query MOunts'. Note that unlike in other TSM tape operations, the tape is immediately unloaded when the backup is complete: MOUNTRetention does not apply to Backup DB, as there is no possibility of further writing to the tape. If using scratch volumes, beware that this function will gradually consume all your scratch volumes unless you do periodic pruning ('DELete VOLHistory' or 'Set DRMDBBackupexpiredays'). If specifying volsers to use, they must *not* already be assigned to a DBBackup or storage pool: if they are, ADSM will instead try to use a scratch volume, unless Scratch=No. Example: 'BAckup DB DEVclass=LIBR.DEVC_3590 VOL=000050 Type=full Scratch=No' BAckup DB cannot proceed if a DELete DBVolume is in progress. Messages: ANR2280I/ANR2369I when the database backup starts; ANR1360I when output volume opened; ANR1361I when the volume is closed; ANR4554I tracks progress; ANR4550I at completion (reports number of pages backed up - but gives no indication how much of the volume was used).
If you neglect to perform a BAckup DB for some time and a significant amount of database updating has occurred, you will be reminded of this by an ANR2121W message in the Activity Log. To get a summary of database backup start and end times over 7 days, do: Query ACtlog MSGno=2280 BEGINDate=-7 Query ACtlog MSGno=4550 BEGINDate=-7 After the ANR2369I message, it can take 10 minutes or more for the actual backup process to show up in Query PRocess - and then maybe 5 minutes more to settle in and start the actual backup...so plan ahead if your Recovery Log is getting uncomfortably close to 100% full. A maximum of 32 Incremental DB backups is supported: attempting a 33rd results in message ANR2361E. Advice: Do not have an appreciable number of Incrementals between Fulls: "tape is tape", and it takes just one defect to ruin the ability to recover your database to currency. Queries: Do either: 'Query VOLHistory Type=DBBackup' or 'Query LIBVolume' to reveal the database backup volume. (A 'Query Volume' is no help because it only reports storage pool volumes, and by their nature, database backup media are outside ADSM storage.) See: Database backup volume, pruning. By using the ADSMv3 Virtual Volumes capability, the output may be stored on another ADSM server (electronic vaulting). Attempting to start a BAckup DB when one is already in progress yields ANR2433E. See also: DELete VOLHistory; dsmserv RESTORE DB; Set DRMDBBackupexpiredays
Number of pages backed up: Will reflect the number of pages involved with committed data. If, in Rollforward mode, you start with the database holding 18M objects, and run expiration to reduce that to 17M, a BAckup DB initiated thereafter will back up 18M pages, not 17M. BAckup DB to a scratch 3590 tape in the 3494 Perform like the following example: 'BAckup DB DEVclass=LIBR.DEVC_3590 Type=Full' BAckup DB to a specific 3590 tape in the 3494 Perform like the following example: 'BAckup DB DEVclass=LIBR.DEVC_3590 Type=Full VOLumenames=000050 Scratch=No' BAckup DEVCONFig *SM server command to back up the device configuration information which *SM uses in standalone recoveries. Syntax: 'BAckup DEVCONFig [Filenames=___]' (No entry is written to the Activity Log to indicate that this was performed.) You should have DEVCONFig in your server options file so that this is done automatically. Backup done When a backup of a file system has completed, the following will be written to the dsmc output: Successful incremental backup of '/FS' where FS is the file system name. Backup file not showing in a query The question sometimes comes up that a file which the user believes is Active and should be in TSM storage is not showing up in a lookup from the client. The file may not be revealed for a number of reasons, including... - It had been deleted or renamed on the client, and expired in TSM. - It was excluded from backups. - You're using the GUI to seek the file and an intermediate directory expired from TSM storage some time ago. - The lookup is not being performed by the file owner, superuser, or a grantee via 'dsmc SET Access'. Backup failure message "ANS4638E Incremental backup of 'FileSystemName' finished with 2 failure" Backup files See also: File name uniqueness Backup files: deletable by client Controlled by the BACKDELete parameter on the 'REGister Node' and 'UPDate Node' commands. Default: No (which thus prohibits a "DELete FIlespace" operation from the client).
Query via 'Query Node Format=Detailed'. Backup files, management class binding By design, you cannot have different backup versions of the same file bound to different management classes. All backup versions of a given file are bound to the same management class. Backup files, delete *SM provides no inherent method to do this, but you can achieve it by the following paradigm: 1. Update Copygroup Verexists to 1, ACTivate POlicyset, do a fresh incremental backup. This gets rid of all but the last (active) version of a file. 2. Update Copygroup Retainonly and Retainextra to 0; ACTivate POlicyset; EXPIre Inventory. This gets ADSM to forget about inactive files. 3. If the files are "uniquely identified by the sub-directory structure above the files", add those dirs to the exclude list. Do an Incremental Backup. The files in the excluded dirs get marked inactive. The next EXPIre Inventory should then remove them from the tapes. See also: Database, delete table entry Backup files, list from server 'Query CONtent VolName ...' Backup files, retention period Is part of the Copy Group definition. Is defined in DEFine DOmain to provide a just-in-case default value. Note that there is one Copy Group in a Management Class for backup files, and one for archived files, so the retention period is essentially part of the Management Class. Backup files, versions 'SHOW Versions NodeName FileSpace' Backup files for a node, list from server SELECT NODE_NAME, FILESPACE_NAME, - HL_NAME, LL_NAME, OWNER, STATE, - BACKUP_DATE, DEACTIVATE_DATE FROM - BACKUPS WHERE - NODE_NAME='UPPER_CASE_NAME' (Be sure that node name is upper case.) See also: HL_NAME; LL_NAME Backup generations See: Backup version Backup Group New in TSM 5.2, the 'dsmc Backup GRoup' command allows treating an arbitrary list of files, possibly from multiple file systems, as a group, such that you can back them up together, where they end up in a virtual file space on the TSM server.
This is helpful, for example, where an application installs and maintains related files in multiple areas of several file systems, and it's important to have these quickly restorable as a collective. A group backup looks like: dsmc backup group -FILEList=/tmp/flist -GROUPName=KrellApp -VIRTUALFSname=/KrellApp -MODE=Full Management class control is via the common Include option, where its first value is the group name and the second is the management class. IBM Technotes: 1169276; 1228033 See also: dsmc Query GRoup Backup Group "simulation" If you have a TSM 5.1 system and need to do much like Backup Group does, here's an approach, using directory symlinks... In backups, TSM does not follow symbolic links to back up a file which is the target of a symlink; but if the symlink names a directory, you can go *through* the symlink to then back up the outlying contents. The filespace reflected in the TSM stgpool will be that which contains the symlink. By extension, you could use VIRTUALMountpoint to have the filespace adopt the name of the embarkation directory which identifies the collective. Backup Image See: dsmc Backup Image; dsmc RESTore GRoup Backup laptop computers One technique: Define a schedule for laptop users that spans a 24-hour window and have the scheduler service running, as SCHEDMODe POlling, starting at boot. This will cause the scheduler to try to contact the server every 20 minutes. When the laptop connects to the network, sometime within the next 20 minutes the scheduler will be able to contact the server, and if the schedule has not yet been executed, it will run. (This is preferable to invoking dsmc at boot time, as the schedule technique deals with the situation where users employ sleep mode a lot, rather than shutting down.) There are, of course, competing products to back up mobile PCs, such as BrightStor ARCserve Backup for Laptops & Desktops.
However, in some levels of some operating systems, it is possible for the OS to implicitly lock the file in response to TSM's attempt to back it up. During backup, TSM may encounter files which were locked by other processes, in which case it will issue msg ANS4987E. It does lock volumes when performing image restorals. Backup log timestamps Client systems sometimes want to initiate backups from the client, at the time and context of the client's choosing. Whereas scheduled backups result in each line of the log being prefixed with a timestamp, this does not happen with command line incremental backups. (Neither running the command as a background process, nor redirecting the output will result in timestamping the lines; nor is there any disclosed testflag or option for generating the timestamp with a client-initiated backup.) One possible recourse is to use the server DEFine CLIENTAction command to perform the backup. The best method is to use a scheduled backup in conjunction with the PRESchedulecmd and POSTSchedulecmd options, where the former can do setup (including any timing waits), and the latter can be used to uniquely name the day's log, and possibly generate statistics from it. BAckup Node TSM server command to start a backup operation for a network-attached storage (NAS) node. See doc APAR IC42526 for info on differential mode backups. Return codes likely come from the datamover underlying the action, where reviewing its logging will help explain problem situations. See also: Query NASBAckup Backup not happening for some files See if your Copy Group FREQuency value is other than 0. 
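For the "Backup log timestamps" entry above: beyond PRESchedulecmd/POSTSchedulecmd, a possible local workaround (an assumed wrapper, not a TSM facility) is to pipe the output of a client-initiated backup through a small filter that prefixes each line with a scheduler-style timestamp:

```shell
# Hypothetical wrapper - not a TSM option. Prefixes every line read
# on stdin with a timestamp, for client-initiated backup logging.
timestamp_lines() {
    while IFS= read -r line; do
        printf '%s %s\n' "$(date '+%m/%d/%Y %H:%M:%S')" "$line"
    done
}
# Intended use (illustrative path and log name):
#   dsmc incremental /home 2>&1 | timestamp_lines >> /var/log/dsmc.incr.log
printf 'Successful incremental backup of /home\n' | timestamp_lines
```

Note that piping dsmc output this way timestamps line arrival at the filter, which for a busy backup is close enough to the event time for log correlation purposes.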
Backup objects for day, query at server SELECT * FROM BACKUPS WHERE - NODE_NAME='UPPER_CASE_NAME' AND - FILESPACE_NAME='___' AND - DATE(BACKUP_DATE)='2000-01-14' Backup of HSM-managed files Use one server for HSM plus the Backup of that HSM area: this allows ADSM to effect the backup (of large files) by copying from one storage pool tape to another, without recalling the file to the host file system. In the typical backup of an HSM-managed file system, ADSM will back up all the files too small to be HSM-migrated (4095 bytes or less); and then any files which were in the disk level of the HSM storage pool hierarchy, in that they had not yet migrated down to the tape level; and then copy across tapes in the storage pool. If Backup gets hung up on a code defect while doing cross-tape backup, you can circumvent by doing a dsmrecall of the problem file(s). The backup will then occur from the file system copy. Be advised that cross-pool backup can sometimes require three drives, as files can span tapes. With only two drives, you can run into an "Insufficient mount points available" condition (ANR0535W, ANR0567). Backup Operation Element of report from 'Query VOLHistory' or 'DSMSERV DISPlay DBBackupvolumes' to identify the operation number for this volume within the backup series. Will be 0 for a full backup, 1 for first incremental backup, etc. See also: Backup Series Backup operation, retry when file in use Have the CHAngingretries (q.v.) Client System Options file (dsm.sys) option specify how many retries you want. In mail spool backups in particular, the epidemic of spam causes the spool files to be excessively busy, which aggravates attempts to back the files up. Default: 4. Backup or Restore NTFS Security Information A selection in the Windows B/A GUI which will cause NTFS security attributes to be evaluated in assessing backup candidacy.
The attributes are at the directory level, meaning that this selection controls whether Windows directory entries will be backed up. This then gets into factors discussed under DIRMc (q.v.). Backup performance Many factors can affect backup performance. Here are some things to look at: - Client system capability and load at the time of backup. - If Expiration is running on the server, performance is guaranteed to be impaired, due to the CPU and database load involved. - Use client compression judiciously. Be aware that COMPRESSAlways=No can cause the whole transaction and all the files involved within it to be processed again, without compression. This will show up in the "Objects compressed by:" backup statistics number being negative (like "-29%"). (To see how much compression is costing, compress a copy of a typical, large file that is involved in your backups, outside of TSM, performing the compression with a utility like gzip.) Beware that using client compression and sending that data to tape drives which also compress data can result in prolonged time at the tape drive as its algorithms struggle to find patterns in the patternless compressed data. - A file system that does compression (e.g., NTFS) will prolong the job. - APAR IC48891 describes a longstanding design deficiency which results in needless retries and accompanying communication link and TSM server over-burdening. Affects clients earlier than 5.3.5. See the description of client testflag ENABLEGETFILSIZE. - Using the MEMORYEFficientbackup option may reduce performance. - The client manual advises: "A very large include-exclude list may decrease backup performance. Use wildcards and eliminate unnecessary include statements to keep the list as short as possible." - Avoid using the unqualified Exclude option to exclude a file system or directory, as Exclude is for *files*: subdirectories will still be traversed and examined for candidates. Instead, use Exclude.FS or Exclude.Dir, as appropriate. 
- If your server Recovery Log is too full (80%+), the server will impose a delay on all transactions (msg ANR2997W), starting with 3 milliseconds and ramping up to 30 milliseconds or more. - Backing up a file system which is networked to this client system rather than native to it (e.g., NFS, AFS) will naturally be relatively slow. - Make sure that if you activated client tracing in the past that you did not leave it active, as its overhead will dramatically slow client performance. - File system topology: conventional directories with more than about 1000 files slow down all access, including TSM. (You can gauge this by doing a Unix 'find' command in large file systems and appreciate just how painful it is to have too many files in one directory.) - The design of the directory access methods in your operating system can make a huge difference. For example, switching from JFS to JFS2 in AIX can result in a big improvement in directory performance due to the more database-like data structure used. Consider inherent performance when choosing an OS or file system type to serve your organization's data. - Consider using MAXNUMMP to increase the number of drives you may simultaneously use, to the extent that unused drives are usually available. - Your Copy Group SERialization choice could be causing the backup of active files to be attempted multiple times. - May be waiting for mount points on the server. Do 'Query SEssion F=D'. - Examine the Backup log for things like a lot of retries on active files (which greatly waste time), and inspect the timestamp sequence for indications of problem areas in the file system. - If an Incremental backup is slow while a Selective or Incrbydate is fast, it can indicate a client with insufficient real memory or other processes consuming memory that the client needs to process an Active files list expeditiously. 
- If the client under-estimates the size of an object it is sending to the server, there may be performance degradation and/or the backup may fail. See IBM site Technote 1156827. - Is the file system disk drive ancient? A 5400 rpm disk with slow seek time cripples everything which tries to use it (not just backups). - Defragment your hard drive! You can regain a lot of performance. (This can also be achieved by performing a file-oriented copy of the file system to a fresh disk, which will also eliminate empty space in directories.) - If a Windows system, consider running DISKCLEAN on the filesystem. - Corrupted files or disk defects can make for seemingly inexplicable slowness in a few areas of the file system. - In a PC, routine periodic executions of a disk analyzer (e.g., CHKDSK, or a more thorough commercial product) are vital to find drive problems which can impair performance. - If Windows backup, and you expect the backup data to rapidly go into a disk storage pool, but the backups are still slow, it may well be the case that file data is going into the disk storage pool, but directories are going to sluggish tape, per TSM management class policies for directories (see DIRMc). - In Windows backups, where there are a lot of directories being backed up as well as files, and the two are going to different management classes, this can result in an excessive number of EndTxn verbs for TSM server database commits, which slows down processing. (See IBM Technote 1247892.) - Do your schedule log, dsmerror log, or server Activity Log show errors or contention affecting progress? - TSM Journaling may help a lot. - The use of CRC (VALIdateprotocol et al) adversely affects performance as the CRC values have to be computed on one end and examined on the other. 
- The number of versions of files that you keep, per your Backup Copy Group, entails overhead: during a Backup, the server has the additional work of checking retention policies for the incoming version of a file, which may cause the oldest version in the storage pool to be marked for expiration. See also: DEACTIVATE_DATE - If AIX, consider using the TCPNodelay client option to send small transactions right away, before filling the TCP/IP buffer. - If running on a PC, disable anti-virus and other software which adds overhead to file access. One customer reported F-Secure Anti-Virus causing major delays on his Windows system. (In 2009, Symantec Endpoint Protection is a known performance drag for TSM clients.) - Backups of very large data masses, such as databases, benefit from going directly to tape, where streaming can often be faster than first going to disk, with its rotational positioning issues. And speed will be further increased by hardware data compression in the drive. - If backups first go to a disk storage pool, consider making it RAID type, to benefit from parallel striping across multiple, separate channels & disk drives. But avoid RAID 5, which is poor at sequential writing. - Make sure your server BUFPoolsize is sufficient to cache some 99% of requests (do 'q db f=d'), else server performance plummets. - Maximize your TXNBytelimit and TXNGroupmax definitions to make the most efficient use of network bandwidth. - Use checksum offload with modern ethernet adapters, to let the adapter perform (TCP) packet checksum validity checking, thus offloading the computer from that task. - Balance access of multiple clients to one server and carefully schedule server admin tasks to avoid waiting for tape mounts, migration, expirations, and the like. Migration in particular should be avoided during backups: see IBM site Technote 1110026. - Make sure that LARGECOMmbuffers Yes is in effect in your <5.3 client (the default is No, except for AIX). For TSM 5.3+, use DISKBuffsize. 
- The client RESOURceutilization option can be used to boost the number of sessions. - If server and client are in the same system, use Shared Memory in Unix and Named Pipes in Windows. - If client accesses server across network: - Examine TCP/IP tuning values and see if other unusual activity is congesting the network. - Make sure that the most efficient traffic routing is being used. A TSM server may be multi-homed; if your TCPServeraddress client option could specify an IP address on the same subnet as the client but instead specifies an address on a different subnet, then throughput is being needlessly impaired. And if your TSM server is not multi-homed to subnets where TSM clients are major users, consider adding such ethernet ports to your server. - See if your client TCPWindowsize is too small - but don't increase it beyond a recommended size. (63 is good for Windows.) - If using 10 or 100 Mb ethernet (particularly 100 Mb), make sure your adapter cards are not set for Auto Negotiation. In particular, if there is a full vs. half duplex mismatch, you will definitely see terrible throughput. (Gigabit ethernet seems to require autonegotiation.) See the topic "NETWORK PERFORMANCE (ETHERNET PERFORMANCE)" near the bottom of this document. - Beware the invisible: networking administrators may have changed the "quality of service" rating - perhaps per your predecessor - so that *SM traffic has reduced priority on that network link. - If it is a large file system and the directories are reasonably balanced, consider using VIRTUALMountpoint definitions to allow backing up the file system in parallel. - A normal incremental backup on a very large file system will cause the *SM client to allocate large amounts of memory for file tables, which can cause the client system to page heavily. Make sure the system has enough real memory, and that other work running on that system at the same time is not causing contention for memory. 
Consider doing Incrbydate backups, which don't use file tables, or perhaps "Fast Incrementals". - Consider it time to split that file system into two or more file systems which are more manageable. - Look for misconfigured network equipment (adapters, switches, etc.). - Are you using ethernet to transfer large volumes of data? Consider that ethernet's standard MTU size is tiny, fine for messaging but not well suited to large volumes of data, making for a lot of processor and transmission overhead in transferring the data in numerous tiny packets. Consider the Jumbo Frame capability in some incarnations of gigabit ethernet, or a transmission technology like fibre channel, which is designed for volume data transfers. That is, ethernet's capacity does not scale in proportion to its speed increase. - If warranted, put your *SM traffic onto a private network (like a SAN does) to avoid competing with other traffic on a LAN in getting your data through. - If you have multiple tape drives on one SCSI chain, consider dedicating one host adapter card to each drive in order to maximize performance. - If you mix SCSI device types on a single SCSI chain, you may be limiting your fastest device to the speed of the slowest device. For example, putting a single-ended device on a SCSI chain with a differential device will cause the chain speed to drop to that of the single-ended device. - If your computer system has only one bus, it could be constrained. (RS/6000 systems can have multiple, independent buses, which distribute I/O.) - Tape drive technologies which don't handle start-stop well (e.g., DLT) will prolong backups. See: Backhitch - Automatic tape drive cleaning and retries on a dirty drive will slow down the action. 
- Tapes whose media is marginal may be tough for the tape drive to write, and the drive may linger on a tape block for some time, laboring until it successfully writes it - and may not give any indication to the operating system that it had to undertake this extra effort and time. (As an example, with a watchable task: via 'Query Process' I once observed a Backup Stgpool taking about four times as long as it should in writing a 3590 tape, the Files count repeatedly remaining constant over 20 seconds as it struggled to write modest-sized files.) - On a Unix TSM server, do 'netstat' (on AIX: netstat | head -2 ; netstat | grep '\.1500 '): if the Recv-Q shows a sustained high value for a given TSM client, it indicates a substantial flow rate disparity between how fast the client is sending versus the media write speed, which can be an indication of media problems. Do Query Volume F=D for the session volume and check the last write date for the volume, which may be some time ago. Note further that TCP receive buffers tied up holding pending data for one incoming TSM client can result in a shortage of buffers for other clients trying to perform backups: problem media can have far-reaching effects. - In Unix, use the public domain 'lsof' command to see what the client process is currently working on. (The AIX5 'procfiles' command is similar.) - The operating system may have I/O performance deficiencies which patches, a new release, or a new version may remedy. - In Solaris, consider utilizing "forcedirectio" (q.v.). To analyze performance, use the 'truss' command to see where the client is processing. - Is cyclic redundancy checking enabled for the server/client (*SM 5.1)? This entails considerable overhead. - Exchange 2000: Consider un-checking the option "Zero Out Deleted Database Pages" (requires a restart of the Exchange Services). See IBM article ID# 1144592 titled "Data Protection for Exchange On-line Backup Performance is Slow" and Microsoft KB 815068. 
- A Windows TSM server may be I/O impaired due to its SCSI or Fibre Channel block size. See IBM site Technote 1167281. - If your Recovery Log does not have adequate capacity, session performance will be degraded, per msg "ANR2997W The server log is 81 percent full. The server will delay transactions by 3 milliseconds." (and 30 milliseconds when 90% full), the purpose being to give a Backup DB operation a better chance to finish expeditiously. - In a TSM 5.5 client, you may be using SSL-based encryption, where certificates are employed, and certificate validation may try to contact crl.verisign.net to check the Certificate Revocation List there, which may involve prolonged timeouts as the backup starts. You can verify this through packet monitoring. If none of the above pan out, consider rerunning the problem backup with client tracing active. See CLIENT TRACING near the bottom of this document. Ref: TSM Performance Tuning Guide See also: Backup taking too long; Client performance factors; Server performance; Tape: Why does it take so long...? Backup performance with 3590 tapes Writing directly to 3590 tapes, rather than having an intermediate disk, is 3X-4X faster: 3590s stream the data where disks can't. Ref: ADSM Version 2 Release 1.5 Performance Evaluation Report. Backup preview TSM 5.3 introduced the ability to preview the files which would be sent to the server in a Backup operation, per the client Include-Exclude specs, via the client PREview command. Related: Restoral preview Backup progress The following message appears after every 500 files: ANS1898I ***** Processed 1,000 files ***** If you need to somehow determine percent completion, in Unix you can do 'tail -f Backup_Log | grep ANS1898I' and compare the progress count with the sum of the in-use inodes for the file systems being backed up ('df -i' output), from which you can display a % value. BACKup REgistry During Incremental backup of a Windows system, the Registry area is backed up. 
However, in cases where you want to back up the Registry alone, you can do so with the BACKup REgistry command. The command backs up Registry hives listed in Registry key HKEY_LOCAL_MACHINE\System\CurrentControlSet\Control\Hivelist Syntax: BACKup REgistry Note that in current clients there are no operands, to guarantee system consistency. Earlier clients had modifying parameters: BACKup REgistry ENTIRE Backs up both the Machine and User hives. BACKup REgistry MACHINE Backs up the Machine root key hives (registry subkeys). BACKup REgistry USER Backs up User root key hives (registry subkeys). See also: BACKUPRegistry Backup Required Before Migration In output of 'dsmmigquery -M -D', an (HSM) attribute of the management class which determines whether it is necessary for a backup copy (Backup/Restore) of the file to exist before it can be migrated by HSM. Defined via MIGREQUIRESBkup in the management class. See: MIGREQUIRESBkup Backup retention grace period The number of days ADSM retains a backup version when the server is unable to rebind the object to an appropriate management class. Defined via the BACKRETention parameter of 'DEFine DOmain'. Backup retention grace period, query 'Query DOmain Format=Detailed', see "Backup Retention (Grace Period)". Backup Series Element of report from 'Query VOLHistory' or 'DSMSERV DISPlay DBBackupvolumes' to identify the TSM database backup series of which the volume is a part. Each backup series consists of a full backup and all incremental backups (up to a maximum of 32) that apply to that full backup, up to the next full backup of the TSM database. A DBsnapshot is an out-of-band db backup, which does not participate in a full+incremental Backup Series (and cannot be used with incremental db backups during a database restoral). DBsnapshots have their own number series, independent of the numbering of the full+incremental series. 
Note: After a DSMSERV LOADDB (as in a database reload), the Backup Series number will revert to 1...which can royally confuse TSM and cause it to refuse to honor a DELete VOLHistory on the last volume of the previous series. When doing DELete VOLHistory, be sure to delete the whole series at once, to avoid the ANR8448E problem. To report the whole series: SELECT * FROM VOLHISTORY WHERE TYPE='BACKUPFULL' OR TYPE='BACKUPINCR' In the VOLHISTORY table, the BACKUP_SERIES field corresponds to the Backup Series name. See also: BAckup VOLHistory Backup sessions, multiple See: RESOURceutilization Backup Set TSM 3.7+ facility to create a collection of a client node's current Active backup files as a single point-in-time amalgam (snapshot) on sequential media, to be stored and managed as a single object in a format tailored to and restorable on the client system whose data is therein represented. The GENerate BACKUPSET server command is used to create the set, intended to be written to sequential media, typically of a type which can be read either on the server or client such that the client can perform a 'dsmc REStore BACKUPSET' either through the TSM server or by directly reading the media from the client node. The media is often something like a CD-ROM, JAZ, or ZIP. Note that you cannot write more than one Backup Set to a given volume. If this is a concern, look into server-to-server virtual volumes. (See: Virtual Volumes) Also known by the misleading name "Instant Archive". Note that the retention period can be specified when the backup set is created: it is not governed by a management class. Also termed "LAN-free Restore". The consolidated, contiguous nature of the set speeds restoral. ("Speeds" may be an exaggeration: while Backup Sets are generated via TSM db lookups, they are restored via lookups in the sequential media in which the Backup Set is contained, which can be slow.) 
Backup Sets are frozen, point-in-time snapshots: they are in no way incremental, and nothing can be added to one. But there are several downsides to this approach: The first is that it is expensive to create the Backup Set, in terms of time, media, and mounts. Second, the set is really "outside" of the normal TSM paradigm, further evidenced by the awkwardness of later trying to determine the contents of the set, given that its inventory is not tracked in the TSM database (which would represent too much overhead). You will not see a directory structure for a backupset. Note that you can create the Backup Set on the server as devtype File and then FTP the result to the client, as perhaps to burn a CD - but be sure to perform the FTP in binary mode! Backup Sets are not a DR substitute for copy storage pools in that Backup Sets hold only Active files, whereas copy storage pools hold all files, Active and Inactive. There is no support in the TSM API for the backup set format. Further, Backup Sets are unsuitable for API-stored objects (TDP backups, etc.) in that the client APIs are not programmed to later deal with Backup Sets, and so cannot perform client-based restores with them. Likewise, the standard Backup/Archive clients do not handle API-generated data. See: Backup Set; GENerate BACKUPSET; dsmc Query BACKUPSET; dsmc REStore BACKUPSET; Query BACKUPSET; Query BACKUPSETContents Ref: TSM 3.7 Technical Guide redbook Backup Set, amount of data Normal Backup Set queries report the number of files, but not the amount of data. You can determine the latter by realizing that a Backup Set consists of all the Active files in a file system, and that is equivalent to the file system size and percent utilized as recorded at last backup, reportable via Query FIlespace. 
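The "Backup Set, amount of data" estimate just described is easily scripted. Below is a minimal sketch which assumes you have already extracted the capacity (MB) and percent-utilized columns from Query FIlespace output into whitespace-separated lines; that input format is an illustration, not the actual query output layout.

```shell
# Sum capacity_MB * pct_util / 100 across input lines of the assumed
# form: <filespace_name> <capacity_MB> <pct_util>
estimate_backupset_mb() {
  awk '{ total += $2 * $3 / 100 } END { printf "%.0f\n", total }'
}
```

For example, 'printf "/home 1024 50\n/var 200 10\n" | estimate_backupset_mb' reports 532 (MB).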
Backup Set, list contents Client: 'Query BACKUPSET' Server: 'Query BACKUPSETContents' See also: dsmc Query BACKUPSET Backup set, on CD In writing Backup Sets to CDs you need to account for the amount of data exceeding the capacity of a CD... Define a devclass of type FILE and set the MAXCAPacity to under the size of the CD capacity. This will cause the data to span TSM volumes (FILEs), resulting in each volume being on a separate CD. Be mindful of the requirement: The label on the media must meet the following restrictions: - No more than 11 characters - Same name for file name and volume label. This might not be a problem for local backupset restores but is mandatory for server backupsets over a devclass of type REMOVABLEFILE. The creation utility DirectCD creates a random CD volume label beginning with the creation date, which will not match the TSM volume label. Ref: Admin Ref; Admin Guide "Generating Client Backup Sets on the Server" & "Configuring Removable File Devices" Backup set, remove from Volhistory A backup set which expires through normal retention processing may leave the volume in the volhistory. There is an undocumented form of DELete VOLHistory to get it out of there: 'DELete VOLHistory TODate=TODAY [TOTime=hh:mm:ss] TYPE=BACKUPSET VOLume=______ [FORCE=YES]' Note that VOLume may be case-sensitive. Backup set, rename? Backup sets cannot be renamed. Backup Set and CLI vs. GUI In the beginning (early 2001), only the CLI could deal with Backup Sets. The GUI was later given that capability. However: The GUI can be used only to restore an entire backup set. The CLI is more flexible, and can be used to restore an entire backup set or individual files within a backup set. Backup Set and TDP The TDPs do not support backup sets - because they use the TSM client API, which does not support Backup Sets. Backup Set and the client API The TSM client API does not support Backup Sets. 
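The media label restrictions noted under "Backup set, on CD" lend themselves to a quick pre-burn sanity check. This is only a sketch of the two stated rules (label of at most 11 characters, file base name equal to the volume label); the authoritative enforcement lies with the TSM server and your CD mastering software, and the file path and label arguments are hypothetical.

```shell
# Return success (0) if the proposed volume passes the restrictions
# noted above, failure otherwise.
valid_cd_volume() {
  file=$1; label=$2
  base=${file##*/}                 # strip any directory path
  [ "${#label}" -le 11 ] && [ "$base" = "$label" ]
}
```

For example, 'valid_cd_volume /tsm/cdstage/BSET001 BSET001' succeeds, while a 14-character label fails the length test.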
Backup Set restoral performance Some specific considerations: - A Backup Set may contain multiple filespaces, and so getting to the data you want within the composite may take time. (Watch out: If you specify a destination other than the original location, data from all file spaces is restored to the location you specify.) - There is no table of contents for backup sets: The entire tape or set has to be read for each restore or query - which explains why a Query BACKUPSETContents is about as time-consuming as an actual restoral. See also "Restoral performance", as general considerations apply. Backup Set TOC support Was fully introduced in TSM 5.4. Backup Set volumes not checked in SELECT COUNT(VOLUME_NAME) FROM VOLHISTORY WHERE TYPE='BACKUPSET' AND VOLUME_NAME NOT IN (SELECT VOLUME_NAME FROM LIBVOLUMES) Backup Sets, report SELECT VOLUME_NAME FROM VOLHISTORY WHERE TYPE='BACKUPSET' Backup Sets, report number SELECT COUNT(VOLUME_NAME) FROM VOLHISTORY WHERE TYPE='BACKUPSET' Backup skips some PC disks (skipping) Possible causes: - Options file updated to add disk, but scheduler process not restarted. - Drive improperly labeled. - Drive was relabeled since PC reboot or since TSM client was started. - The permissions on the drive are wrong. - Drive attributes differ from those of drives which *will* back up. - Give TSM full control to the root on each drive (may have been run by SYSTEM account, lacking root access). - Msgmode is QUIET instead of VERBOSE, so you see no messages if nothing goes wrong. - TSM client code may be defective such that it fails if the disk label is in mixed case, rather than all upper or lower. Backup skips some Unix files An obvious cause for this occurring is that the file matches an Exclude. 
Another cause: The Unix client manual advises that skipping can occur when the LANG environment variable is set to C, POSIX (limiting the valid characters to those with ASCII codes less than 128), or other values with limitations for valid characters, and the file name contains characters with ASCII codes higher than 127. Backup "stalled" Many ADSM customers complain that their client backup is "stalled". In fact, it is almost always the case that it is processing, simply taking longer than the person thinks. In traditional incremental backups, the client must get from the server a list of all files that it has for the filespace, and then run through its file system, comparing each file against that list to see if it warrants backup. That entails considerable server database work, network traffic, client CPU time, and client I/O...which is aggravated by overpopulated directories. Summary advice: give it time. Backup status, see from server Do 'Query EVent * *' and look in the Status column to find one of: Future, Started, Completed, Failed, In Progress, Missed, Pending, Restarted, Severed, Uncertain. BAckup STGpool *SM server operation to create a backup copy of a storage pool in a Copy Storage Pool (by definition on serial medium, i.e., tape). Syntax: 'BAckup STGpool PrimaryPoolName CopyPoolName [MAXPRocess=N] [Preview=No|Yes|VOLumesonly] [Wait=No|Yes]' Note that storage pool backups are incremental in nature so you only produce copies of files that have not already been copied. (It is incremental in the sense of adding new objects to the backup storage pool. It is not exactly like a client incremental backup operation: BAckup STGpool itself does not cause objects to be identified as deletable from the *SM database. It is Expire Inventory that rids the backup storage pool of obsolete objects.) So, if you cancel a BAckup STGpool, the next invocation will pick up where the prior one left off. 
It seems to operate by first taking time to generate a list of files to back up, then it operates from that list (so, any data arriving after that list construction point does not participate in this backup stgpool instance). Order of file backup: most recent data first, working back in time. BAckup STGpool copies data: it does not examine the data for issues...you need to use AUDit Volume for that, optionally using CRC data. Only one backup may be started per storage pool: attempting to start a second results in error message "Backup already active for pool ___". MAXPRocess: Specify only as many as you will have available mount points or drives to service them (DEVclass MOUNTLimit, less any drives already in use or unavailable (Query DRive)). Each process will select a node and copy all the files for that node. Processes that finish early will quit. The last surviving process should be expected to go on to other nodes' data in the storage pool. If you don't actually get that many processes, it could be due to the number of mount points or there being too few nodes represented in the stgpool data. (See IBM Technote 1234463.) Elapsed time cannot be less than the time to process the largest client data set. Beware using all the tape drives: migration is a lower priority process and thus can be stuck for hours waiting for BAckup STGpool to end, which can result in irate Archive users. MAXPRocess and preemption: If you invoked BAckup STGpool to use all drives and a scheduled Backup DB started, the Backup DB process would pre-empt one of the BAckup STGpool processes to gain access to a drive (msg ANR1440I): the other BAckup STGpool processes continue unaffected. (TSM will not reinitiate the terminated process after the preempting process has completed.) Preview: Reveals the number of files and bytes to be backed up and a list of the primary storage pool volumes that would be mounted. 
You cannot back up a storage pool on one computer architecture and restore it on another: use Export/Import. If a client is introducing files to a primary storage pool while that pool is being backed up to a copy storage pool, the new files may get copied to the copy storage pool, depending upon the progress that the BAckup STGpool has made. If a given BAckup STGpool is already in progress, attempting to start another with the same source and destination storage pools results in ANR2457E. Preemption: BAckup STGpool will wait until needed tape drives are available: it does not preempt Backups or HSM Recalls or even Reclamation. By using the ADSMv3 Virtual Volumes capability, the output may be stored on another ADSM server (electronic vaulting - as archive type files). Missing features: The ability to limit how long it runs or how many tapes it consumes - something that sites need to keep it from using up all scratches. Msgs: ANR0984I, ANR2110I (start); ANR1228I (one for each input volume); ANR1212I, ANR0986I (reports process, number of files, and bytes), ANR1214I (reports storage pool name, number of files, and bytes), ANR1221E (if insufficient space in copy storage pool) IBM Technotes: 1208545 See also: Aggregates BAckup STGpool, estimate requirements Use the Preview option. BAckup STGpool, how to stop If you need to stop the backup prematurely, you can do one of: - CANcel PRocess on each of its processes. But: you need to know the process numbers, and so can't effect the stop via an administrative schedule or server script. (The next BAckup STGpool will pick up where the prior one left off.) - UPDate STGpool ... ACCess=READOnly This will conveniently cause all the backup processes to stop after they have finished with the file they are currently working on. In the Activity Log you will find message ANR1221E, saying that the process terminated because of insufficient space. 
(Updating the storage pool back to READWrite before a process stops will prevent the process from being stopped: it has to transition to the next file for it to see the READOnly status.) - Perform the process lookup and cancel from an OS level (e.g., Perl) facility where you can flexibly process query output via dsmadmc. BAckup STGpool, minimize time To minimize the time for the operation: - Perform the operation when nothing else is going on in ADSM; - Maximize your TSM database Cache Hit Pct. (standard tuning); - Maximize the 'BAckup STGpool' MAXPRocess number to: The lesser of the number of tape drives or nodes available when backing up disk pools (which needs tape drives only for the outputs); The lesser of either half the number of tape drives or the number of nodes when backing up tape pools (which needs tape drives for both input and output). - If you have an odd number of tape drives during a tape pool backup, one drive will likely end up with a tape lingering in it after stgpool backup is done with that tape, and *SM's rotational re-use of the drive will have to wait for a dismount. So for the duration of the storage pool backup, consider setting your DEVclass MOUNTRetention value to 1 to assure that the drive is ready for the next mount. - If you have plenty of tapes, consider marking previous stgpool backup tapes read-only such that ADSM will always perform the backup to an empty tape and so not have to take time to change tapes when it fills last night's. BAckup STGpool, order within hierarchy When performing a Backup Stgpool on a storage pool hierarchy, it should be done from the top of the hierarchy to the bottom: you should not skip around (as for example doing the third level, then the first level, then the second). Remember that files migrate downward in the hierarchy, not upward. If you do the Backup Stgpool in the same downward order, you will guarantee not missing files which may have migrated in between storage pool backups. 
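The OS-level lookup-and-cancel approach mentioned under "BAckup STGpool, how to stop" can be sketched in shell rather than Perl. The dsmadmc invocation in the comment (credentials, -dataonly=yes, and the assumption that the process number is the first column of 'Query PRocess' output) is illustrative only; adjust for your site.

```shell
# Print the process numbers of Backup Storage Pool processes found in
# 'Query PRocess' output read from stdin (assumes the process number
# leads each line).
backup_stgpool_procs() {
  awk '/Backup Storage Pool/ { print $1 }'
}
# Hypothetical use:
#   dsmadmc -id=admin -password=xxx -dataonly=yes "Query PRocess" |
#     backup_stgpool_procs | while read p; do
#       dsmadmc -id=admin -password=xxx "CANcel PRocess $p"
#     done
```

This form can be run from an administrative schedule wrapper, which the bare CANcel PRocess approach cannot, since the process numbers are discovered at run time.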
BAckup STGpool and Migration It is best not to have a BAckup STGpool running on the same storage pool from which Migration is occurring, because they will interfere with one another. In performing Query PRocess where both are running, I've actually seen the Backup Stgpool Files Backed Up and Bytes Backed Up values *decrease* in repeated queries, which indicates that Migration grabbed and migrated files before the Backup Stgpool could! BAckup STGpool statistics See: SHow TRANSFERSTATS BAckup STGpool taking too long Some possible causes... More data: This is the simplest cause, where clients are simply sending more data in to TSM storage. This is easy to see in ANE message client summary statistics or the TSM accounting log. Busy disk storage pool: Where the source storage pool is disk, if this operation is attempted while data is flowing into the disk storage pool from clients, there will be contention and considerable overhead slowing it down. Tape mounts: Waiting for drives to become available, and waiting for mounts and dismounts to complete. If the source of the backup is a tape storage pool, consider that collocation can make for many tapes. This is reflected in the server Activity Log. Marginal media: The oxide or general condition of the tape makes it tough for the input tape drive to read or the output tape drive to write, causing lingering on a tape block for some time, laboring until it successfully completes the I/O - and may not give any indication to the operating system that it had to undertake this extra effort and time. To analyze: observe via 'Query Process', watching for the Files count to repeatedly remain constant as a file of just modest size is copied. But is it the input or output volume? To determine, do 'UPDate Volume ______ ACCess=READOnly' on the output volume: this will cause the BAckup STGpool to switch to a new output volume. 
If subsequent copying suffers no delay, then the output tape was the problem; else it was probably the input volume that was troublesome. While the operation proceeds, return the prior output volume to READWrite state, which will tend to cause it to be used for output when the current output volume fills, at which time a different input volume is likely. If copying becomes sluggish again, then certainly that volume is the problem. Tape cartridge memory problems: Known as the "VCR data problem" for 3590 tapes, this can also afflict LTO tape cartridges (Technote 1209563). Preemption: Various other server processes and client sessions can preempt the Backup Stgpool - which would cause it to be terminated and need to be restarted some time later, meaning that more pending data can build up in the mean time, resulting in a longer runtime when it finally does occur. TSM server software defects: Search at the TSM Support Page for "backup stgpool degraded" or "backup stgpool performance" to see these. As of late 2005, there is a performance problem introduced by APAR IC45931's fix, introduced in the 5.2.6.2 server level, which causes volume contents tracking optimizations to be lost in re-examining volume candidacy. BAckup STGPOOLHierarchy There is no such command - but there should be: The point of a storage pool hierarchy is that if a file object is in any storage pool within the hierarchy, it is "there". In concert with this concept, there should be a command which generally backs up the hierarchy to backup storage. The existing command, BAckup STGpool, is antithetical, in that it addresses a physical subset of the whole, logical hierarchy: it is both a nuisance to have to invoke against each primary storage pool in turn, and problematic in that a file which moves in the hierarchy might be missed by the piecemeal backup. Backup storage pool See also: Copy Storage Pool Backup storage pool, disk? 
(disk buffer for Backup) Beware using a disk (devtype DISK) as the 1st level of the backup storage pool hierarchy. TSM storage hierarchy rules specify that if a given file is too big to fit into the (remaining) space of a storage pool, it should instead go directly down to the next level (presumably, tape). What can happen is that the disk storage pool can get full because migration cannot occur fast enough, and the backup will instead try to go directly to tape, which can result in the client session getting hung up on a Media Wait (MediaW status). Mitigation: Use MAXSize on the disk storage pool, to keep large files from using it up quickly. However, many clients back up large files routinely, so you end up with the old situation of clients waiting for tape drives. Another problem with using this kind of disk buffering for Backups is that the migration generates locks which interfere with Backup, worse on a multiprocessor system. If TSM is able to migrate at all, it will be thrashing trying to keep up, continually re-examining the storage pool contents to fulfill its migration rules of largest file sizes and nodes. Lastly, you have to be concerned that your backup data may not all be on tape: being on disk, it represents an incomplete tape data set, and jeopardizes recoverability of that filespace, should the disk go bad. See also: Backup through disk storage pool Backup success message "Successful incremental backup of 'FileSystemName'", which has no message number. Backup successful? You can check the 11th field of the dsmaccnt.log. BACKup SYSTEMObject See: dsmc BACKup SYSTEMObject Backup SYSTEMSErvices No longer used, as of TSM 5.5, in that system state and system services are now backed up and restored as a single entity. Backup table See: BACKUPS Backup taking too long (seems like it "hangs"; hung, freezes, sluggish, slow) Sometimes it may seem that the client is hung, but almost always it is active.
To determine why it's taking as long as it is, you need to take a close look at the system and see if it or TSM is really hung, or simply slow or blocked. Examination of the evolutionary context of the client might show that the number of files on it has been steadily increasing, and so has the number in TSM storage, and thus an increasingly burdensome inventory is obtained from the server during a dsmc Incremental. The amount of available CPU power and memory at the time are principal factors: it may be that the system's load has evolved whereas its real memory has not, and it needs more. Use your opsys monitoring tools to determine if the TSM client is actually busy in terms of CPU time and I/O in examination of the file system: the backup may simply still be looking for new files to send to server storage. The monitor should show I/O and CPU activity proceeding. In the client log, look for the backup lingering in a particular area of the file system, which can indicate a bad file or disk area, where a chkdsk or the like may uncover a problem. You could also try a comparative INCRBYDate type backup and see if that does better, which would indicate difficulty dealing with the size of the inventory. TSM Journaling may also be an option. Examine your TSM server Activity Log: I have seen such hangs occur when there were "insufficient mount points available" (message ANR0535W) - a condition from which the client may or may not recover, where 'dsmc schedule' may have to be restarted. In some "cranky" OS environments (NetWare), a locked file in the file system may cause the backup to get stuck at that point, due to poor handling by the OS. Consider doing client tracing to identify where the time is concentrated. (See "CLIENT TRACING" section at bottom of this document.) If not hung, then one or more of the many performance affectors may be at play.
See: Backup performance
Backup through disk storage pool (disk buffer) It is traditional to back up directly to tape, but you can do it through a storage pool hierarchy with a disk storage pool ahead of tape.
Advantages:
- Immediacy: no waiting for tape mount.
- No queueing for limited tape drives when collocation is in effect.
- 'BAckup STGpool' can be faster, to the extent that the backup data is still on disk, as opposed to a tape-to-tape operation.
Disadvantages:
- TSM server is busier, having to move the data first to disk, then to tape (with corresponding database updates).
- There can still be some delays for tape mounts, as migration works to drain the disk storage pool.
- Backup data tends to be on disk and tape, rather than all on tape. (This can be mitigated by setting migration levels to 0% low and 0% high to force all the data to tape.)
- A considerable amount of disk space is dedicated to a transient operation.
- With some tape drive technology you may get better throughput by going directly to tape, because the streaming speed of some tape technology is by nature faster than disk. With better tape technology, the tape is always positioned, ready for writing, whereas the rotating disk has to wait for its spot to come around again. And, the compression in tape drive hardware can result in the effective write speed exceeding even the streaming rate spec.
- If the disk pool fills, incoming client sessions will go into media wait and will thereafter remain tape-destined even if the disk pool empties. Note also that this will result in *two* tapes being in Filling state in the tape storage pool: one for the direct-to-tape operation, and another for the disk storage pool migration to drain to tape.
- In *SM database restoral, part of that procedure is to audit any disk storage pool volumes; so a good-sized backup storage pool on disk will add to that time.
See also: Backup storage pool, disk?
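The hierarchy placement rule described above - a file that exceeds a pool's MAXSize, or that cannot fit in the pool's remaining space, drops to the next level (presumably tape) - can be sketched as follows. This is an illustrative model only, not TSM code; the pool names and sizes are hypothetical.

```python
# Illustrative sketch (not TSM code) of the storage-hierarchy placement
# rule: a file that exceeds a pool's MAXSize, or that cannot fit in the
# pool's remaining space, drops to the next pool in the hierarchy.
# Pool names and sizes below are hypothetical.

def choose_pool(file_size, hierarchy):
    """hierarchy: list of dicts with 'name', 'maxsize' (None = no limit),
    and 'free' (bytes remaining); returns the accepting pool's name."""
    for pool in hierarchy:
        too_big = pool["maxsize"] is not None and file_size > pool["maxsize"]
        if not too_big and file_size <= pool["free"]:
            return pool["name"]
    return None  # no pool in the hierarchy can accept the file

hierarchy = [
    {"name": "DISKPOOL", "maxsize": 100 * 2**20, "free": 50 * 2**20},
    {"name": "TAPEPOOL", "maxsize": None, "free": 10 * 2**40},
]

print(choose_pool(10 * 2**20, hierarchy))   # small file fits on disk
print(choose_pool(200 * 2**20, hierarchy))  # exceeds disk MAXSize, so tape
print(choose_pool(80 * 2**20, hierarchy))   # under MAXSize, but pool too full
```

The third case is the one that surprises people: the file is within MAXSize, but the disk pool has filled faster than migration can drain it, so the session goes tape-bound anyway.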
Backup version An object, directory, or file space that a user has backed up that resides in a backup storage pool in ADSM storage. The most recent is the "active" version; older ones are "inactive" versions. Versions are controlled in the Backup Copy Group definition (see 'DEFine COpygroup'). "VERExists" limits the number of versions, with the excess being deleted - regardless of the RETExtra which would otherwise keep them around. "VERDeleted" limits versions kept of deleted files. "RETExtra" is the retention period, in days, for all but the latest backup version. "RETOnly" is the retention period, in days, for the sole remaining backup version of a file deleted from the client file system. Note that individual backups cannot be deleted from either the client or server. See Active Version and Inactive Version. Backup version, make unrecoverable First, optionally, move the file on the client system to another directory. 2nd, in the original directory replace the file with a small stub of junk. 3rd, do a selective backup of the stub as many times as you have 'versions' set in the management class. This will make any backups of the real file unrestorable. 4th, change the options to stop backing up the real file. There is a way to "trick" ADSM into deleting the backups: Code an EXCLUDE statement for the file, then perform an incremental backup. This will cause existing backup versions to be flagged for deletion. Next, run EXPIre Inventory, and voila! The versions will be deleted. Backup via Schedule, on NT Running backups on NT systems through "NT services" can be problematic: If you choose Logon As and assign it an ADMIN ID with all the necessary privileges you can think of, it still may not work. Instead, double-click on the ADSM scheduler and click on the button to run the service as the local System Account. BAckup VOLHistory TSM server command to back up the volume history data to an ordinary file system file. 
This causes TSM to run through its database, formatting and writing volume history entries to that flat file. At the top of the file is a prolog block identifying it and when it was created. Entries are written by ascending timestamp, most recent at the bottom. Syntax: 'BAckup VOLHistory [Filenames=___]' (No entry is written to the Activity Log to indicate that this was performed.) Note that you need not explicitly execute this command if the VOLumeHistory option is coded in the server options file, in that the option causes TSM to automatically back up the volume history whenever it does something like a database backup. However, TSM does not automatically back up the volume history if a 'DELete VOLHistory' is performed, so you may want to manually invoke the backup then. This command is also useful where the flat file was inadvertently destroyed and TSM is in a period where it would not rewrite the file; or, it is desired to capture a snapshot of the file in a different location, as perhaps in nightly system housekeeping; or, the site realizes the impact of TSM rewriting a multi-megabyte file during prime time, so instead schedules this command to do the deed during quieter times, knowing that there will not be significant entries in the meantime. See also: Backup Series; VOLUMEHistory Backup MB, over last 24 hours SELECT SUM(BYTES)/1000/1000 AS "MB_per_day" FROM SUMMARY WHERE ACTIVITY='BACKUP' AND (CURRENT_TIMESTAMP-END_TIME)HOURS <= 24 HOURS Backup vs. Archive, differences See "Archive vs. Selective Backup". Backup vs. Migration, priorities Backups have priority over migration. Backup without expiration Use INCRBYDate (q.v). Backup without rebinding In AIX, accomplish by remounting the file system on a special mount point name; or, on a PC, change the volume name/label of the hard drive. Then back up with a different, special management class. This will cause a full backup and create a new filespace name.
Another approach would be to do the rename on the other end: rename the ADSM filespace and then back up with the usual management class, which will cause a full backup to occur and regenerate the former filespace afresh. Backup won't happen See: Backup skips some PC disks Backup/Archive client The standard, common TSM client, consisting of a command line interface (CLI) and graphical user interface (GUI). It is important to understand that this client is for the preservation of data in ordinary file systems. It is *not* capable of preserving and recovering your operating system boot disk: that requires a "bare metal" regimen, as described in Redbooks, and possible use of allied Tivoli products, such as IBM Tivoli Storage Manager for System Backup and Recovery. See also: Bare Metal Restore BACKUP_DIR Part of Tivoli Data Protection for Oracle. Should be listed in your tdpo.opt file. It specifies the client directory which will be used for storing the files on your server. If you list the filespaces created for that node on the server after a successful backup, you will see one filespace with the same name as your BACKUP_DIR. Backup-archive client A program that runs on a file server, PC, or workstation that provides a means for ADSM users to back up, archive, restore, and retrieve objects. Contrast with application client and administrative client. BackupDomainList The title under which DOMain-named file systems appear in the output of the client command 'Query Options'. BackupExec Veritas Backup Exec product. A dubious aspect is the handling of open files, per a selectable option: it copies a 'stub' to tape, allowing it to skip the file. Apparently, most of the time when you restore the file, it's either a null file or a partial copy of the original, either way being useless.
http://www.BackupExec.com/ BACKUPFULL In 'Query VOLHistory' or 'DSMSERV DISPlay DBBackupvolumes' or VOLHISTORY database TYPE output, this is the Volume Type to say that the volume was used for a full backup of the database. See: VOLHISTORY BACKUPINCR In 'Query VOLHistory' or VOLHISTORY database TYPE output, this is the Volume Type to say that the volume was used for an incremental backup of the database. See: VOLHISTORY BACKUPRegistry Option for NT systems only, to specify whether ADSM should back up the NT Registry during incremental backups. Specify: Yes or No Default: Yes The Registry backup works by using an NT API function to write the contents of the Registry into the adsm.sys directory. (The documentation has erroneously been suggesting that the system32\config Registry area should be Excluded from the backup: it should not.) The files written have the same layout as the native registry files in \winnt\system32\config. You can back up just the Registry with the BACKup Registry command. In Windows 2000 and beyond, you can use the DOMain option to control the backup of system objects. Ref: redbook "Windows NT Backup and Recovery with ADSM" (SG24-2231): topic 4.1.2.1 Registry Backup BACKUPS (BACKUPS table) SQL: TSM database table containing info about all active and inactive files backed up. Along with ARCHIVES and CONTENTS, constitutes the bulk of the *SM database contents. Columns: NODE_NAME, FILESPACE_NAME, FILESPACE_ID, STATE (ACTIVE_VERSION, INACTIVE_VERSION), TYPE ('FILE' or 'DIR'), HL_NAME (intervening directories), LL_NAME (file name), OBJECT_ID, BACKUP_DATE, DEACTIVATE_DATE, OWNER, CLASS_NAME. Notes: Does not contain info about file attributes (size, permissions, timestamps, etc.) or the volumes which the objects are on (see the Contents table) or the storage pool in which the file resides. The OBJECT_ID uniquely identifies this file among all its versions.
However, there is no corresponding ID in the CONTENTS table such that you could get the containing volume name from it. (There is only the undocumented SHow BFO command.) Note that this is usually a very large table at most sites. A Select on it can take a very long time, if very specific Where options are not employed in the search. In a Select, you can do CONCAT(HL_NAME, LL_NAME) to stick those two components together, to make the output more familiar; or concatenate the whole path by doing: SELECT FILESPACE_NAME || HL_NAME || LL_NAME FROM BACKUPS. See: CONTENTS; DEACTIVATE_DATE; HL_NAME; LL_NAME; OWNER; STATE; TYPE Backups, count of bytes received Use the Summary table, available in TSM 3.7+, like the following: Sum for today's backups, thus far: SELECT Dec((SUM(BYTES) / (1024 * 1024 * 1024)),15) as "BACKUP GB TODAY" from SUMMARY where (DATE(END_TIME) = CURRENT DATE) and ACTIVITY = 'BACKUP' More involved: SELECT SUM(BYTES) as Sum_Bytes from SUMMARY where (DATE(END_TIME) = CURRENT DATE - 1 DAYS and TIME(END_TIME) >= '20.00.00') OR (DATE(END_TIME) = CURRENT DATE) and ACTIVITY = 'BACKUP' See also: Summary table Backups, parallelize Going to a disk pool first is one way; then the data migrates to tape. To go directly to tape: You may need to define your STGpool with COLlocation=FILespace to achieve such results; else *SM will try to fill one tape at a time, making all other processes wait for access to the tape. Further subdivision is afforded via VIRTUALMountpoint. (Subdivide and conquer.) That may not be a good solution where what you are backing up is not a file system, but a commercial database backup via agent, or a buta backup, where each backup creates a separate filespace. In such situations you can use the approach of separate management classes, so as to have separate storage pools, but still using the same library and tape pool.
If you have COLlocation=Yes (node) and need to force parallelization during a backup session, you can momentarily toggle the single, current output tape from READWrite to READOnly to incite *SM to have multiple output tapes. Backups, prevent There are times when you want to prevent backups from occurring, as when a restoral is running and fresh backups of the same file system would create version confusion in the restoral process, or where client nodes tend to inappropriately use the TSM client during the day, as in kicking off Backups at times when drives are needed for other scheduled tasks. You can prevent backups in several ways: In the *SM server: - LOCK Node, which prevents all access from the client - and which may be too extreme. - 'UPDate Node ... MAXNUMMP=0', to be in effect during the day, to prevent Backup and Archive, but allow Restore and Retrieve. In the *SM client: - In the Include-Exclude list, code EXCLUDE.FS for each file system. In general: - If the backups are performed via client schedule: Unfortunately, client schedules lack the ACTIVE= keyword such that we can render them inactive. Instead, you can do a temporary DELete ASSOCiation to divorce the node from the backup schedule. - If the backups are being performed independently by the client: Do DISAble SESSions after the restoral starts, to allow it to proceed but prevent further client sessions. Or you might do UPDate STGpool ... ACCess=READOnly, which would certainly prevent backups from proceeding. See also: "Restorals, prevent" for another approach Backups go directly to tape, not disk Some shops have their backups first go as intended to a disk storage pool, with migration to tape. But they may find backups going directly to tape. Or, you may perform a Backup Stgpool from the tape level of the storage pool hierarchy, just in case, and find fresh data there, which you don't expect. 
Possible causes:
- File too big: A file cannot be accommodated in the storage pool if its size exceeds either the MAXSize or physical size of the storage pool.
- Pool too full: If the amount of data in the storage pool reaches HIghmig, or the absolute capacity of the storage pool, no more can fit.
- Your management classes aren't what you think they are: Some of your clients may be using an unexpected management class whose Copy Group sends the data directly to tape. Or, data may be going to tape implicitly, as in the case of Windows backups where the directories don't necessarily go to the same management class as the data, but rather to the MC having the longest RETOnly. In this case, look into DIRMc.
- The backup occurred choosing a management class which goes to tape.
- Maybe only some of the data is going directly to tape: the directories. Remember that TSM by default stores directories under the Management Class with the longest retention, modifiable via DIRMc.
- Your storage pool hierarchy was changed by someone.
- See also "ANS1329S" discussion about COMPRESSAlways effects.
- Your client (perhaps DB2 backup) may be overestimating the size of the object being backed up.
- Um, the stgpool Access mode is Read/Write, yes?
- Did you change the client Include options to choose a new management class, which goes directly to tape, but neglect to restart the client scheduler for it to pick up the change?
- If your disk stgpool has CAChe=Yes, you may be fooled into believing that the data has not migrated when in fact it has.
A good thing to check: Do a short Select * From Backups... to examine some of those files, and see what they are actually using for a Management Class. If puzzled, perform analysis: Pore over your Activity Log for the period, looking for full-stgpool migrations. Where direct-to-tape operations occur, client backup log statistics (and the TSM Accounting records) will reflect media waits.
Backups without expiration Use INCRBYDate (q.v).
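For the "Select * From Backups..." check suggested above, one informal way to spot an unexpected management class is to capture the Select output in comma-delimited form (dsmadmc offers a comma-delimited output mode) and tally the CLASS_NAME column. The sketch below is illustrative only: the sample rows are invented, and the assumed column order (node, filespace, class) is hypothetical - match it to the columns you actually Select.

```python
# Illustrative tally of management-class usage from captured Select
# output against the BACKUPS table.  The sample rows are invented, and
# column 3 is assumed to be CLASS_NAME per a hypothetical
# "select node_name, filespace_name, class_name from backups ..." run.
import csv
from collections import Counter
from io import StringIO

sample = StringIO(
    "NODE1,/home,STANDARD\n"
    "NODE1,/home,STANDARD\n"
    "NODE1,/var,TAPE_DIRECT\n"
)

def class_counts(rows):
    """rows: an iterable of parsed CSV rows; counts the 3rd column."""
    return Counter(row[2] for row in rows)

counts = class_counts(csv.reader(sample))
print(counts)  # Counter({'STANDARD': 2, 'TAPE_DIRECT': 1})
```

A class appearing here that you did not expect (the invented TAPE_DIRECT above) is the cue to examine its Copy Group destination.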
Backupset See: Backup Set
baclient Shorthand for Backup-Archive Client.
Bad tape, how to handle A tape may experience ANR8944E or similar media error messages, indicating that data previously written to it cannot now be read. So, how best to handle such a situation?
1. Gather data. Capture the output of 'Query Volume ...... F=D' for later evaluation. Particularly observe the last write date for the volume: it is typically the case that longer ago means harder to read, which is partially due to tape drive variations over time, and partially to media conditions. Also do 'Query CONtent VolName ...... DAmaged=Yes' to get a sense of issues with the volume.
2. If the volume is marked Damaged, see if AUDit Volume ... Fix=No will un-mark it. If not, go to 3.
3. Perform MOVe Data - repeatedly. The initial MOVe Data will typically copy most of the files off the tape, but will report Unreadable Files during execution and ultimately ANR0986I FAILURE. Repeating the operation causes the tape to relax and, usually, to be mounted on a different drive, where all drives are slightly different and will often be able to read a few more of the problem files. An example: I had such a bad tape. The first MOVe Data got all but 7 files off of it. The next MOVe Data attempt got 2 more. The next, 1 more. The next, 2 more. The next, none. This action gets as much data as possible off the tape before the next operation.
4. Perform RESTORE Volume. This will mark the volume Destroyed, and restore its remaining contents to its same stgpool from Copy Storage Pool copies of the files. Having done MOVe Data beforehand, you minimize the number of tape mounts needed to accomplish that.
5. Perform CHECKOut LIBVolume. You don't want TSM trying to use a dubious tape.
6. Perform diagnostics to assess the condition of the tape. Well, we know that it was difficult to read the tape - but is that the tape's fault, or the fault of the drive at the time? (It takes two to tango.)
Evaluate the tape by using tapeutil's "rwtest" function. If it passes, check the tape in and thereafter monitor its use. If it fails: if still under warranty, return the tape to the supplier for replacement or refund; if out of warranty, destroy it. I would additionally physically inspect the tape, pulling out a meter or so of it to have a look: in some cases, a drive has a mechanical defect and is chewing up tapes - where it will mangle all your tapes if given the opportunity. (In 3590 experience, I had a case where the CE extricated a "bad tape" from the drive and left it for me - where the tape's leader block was missing... which begs the question of where it was: still inside the drive, causing mayhem with other inserted tapes.) See also: Tape problems handling bak DFS command to start the backup and restore operations that direct them to buta. See also: buta; butc; DFS bakserver BackUp Server: DFS program to manage info in its database, serving recording and query operations. See also "buserver" of AFS. Barcode See CHECKLabel Barcode, examine tape (to assure that it is physically in the library) 'mtlib -l /dev/lmcp0 -a -V VolName' causes the robot to move to the tape and scan its barcode. 'mtlib -l /dev/lmcp0 -a -L FileName' can be used to examine tapes en masse, by taking the first volser on each line of the file. Barcode volume name length May be 6 or 8 characters for LTO1 (Ultrium1), LTO2 (Ultrium2), 3592. But must be 8 chars for LTO3 (Ultrium3). TSM gets the volume name from the library per a combination of the library configuration settings (for six- or eight-character volume names) and the tape device driver being used. Msgs: ANR8787W; ANR8821E; ANR9754I IBM Technotes: 1217789; 1212111 Bare Metal Restore (BMR) This is the recovery of a computer system from the point of a blank system disk, as for example when the boot disk failed and had to be replaced.
Such recovery requires a data preservation and recovery regimen different and separate from normal Backup/Archive operations, in that B/A is not designed to handle BMR needs. BMR is the realm of allied Tivoli products, such as IBM Tivoli Storage Manager for System Backup and Recovery, and 3rd party providers, such as: Cristie (http://www.cristie.com) Storix (http://www.storix.com/tsm) Redbooks: "Disaster Recovery Strategies with Tivoli Storage Management"; "ADSM Client Disaster Recovery: Bare Metal Restore". Users group: TSM AIX Bare Metal Restore Special interest group. Subscribe by sending email to TSMAIXBMR-subscribe@yahoogroups.com or via the yahoogroups web interface at http://www.yahoogroups.com See also: BMR Bare Metal Restore, Windows? BMR of Windows is highly problematic, due to the Registry orientation of the operating system and hardware dependencies. I.e., don't expect it to work. As one customer put it: "Windows is the least transportable and least modular OS ever." On the IBM website is the helpful article "Modified instructions for complete Restores of Windows Systems". Batch mode Start an "administrative client session" to issue a single server command or macro, via the command: 'dsmadmc -id=YOURID -pa=YOURPW CMDNAME', as described in the ADSM Administrator's Reference. BCR Bar Code Reader. BCV EMC disk: Business Continuance Volumes. BEGin EVentlogging Server command to begin logging events to one or more receivers. A receiver for which event logging has begun is an active receiver. When the server is started, event logging automatically begins for the console and activity log, and for any receivers that are started automatically based on entries in the server options file. You can use this command to begin logging events to receivers for which event logging is not automatically started at server startup. You can also use this command after you have disabled event logging to one or more receivers.
Syntax: 'BEGin EVentlogging [ALL|CONSOLE|ACTLOG|EVENTSERVER|FILE|FILETEXT|SNMP|TIVOLI|USEREXIT]' See: User exit BeginObjData "last verb" as seen where a client is waiting for media to mount when retrieving Archive data or performing a Restore. Benchmark Surprisingly, many sites simply buy hardware and start using it, and then maybe wonder if it is providing its full performance potential. What should happen is that the selection of hardware should be based upon performance specifications published by the vendor; then, once it is made operational at the customer site, the customer should conduct tests to measure and record its actual performance, under ideal conditions. That is a benchmark. Going through this process gives you a basis for accepting or rejecting the new facilities and, if you accept them, you have a basis for later comparing daily performance to know when problems or capacity issues are occurring. BETWEEN SQL clause for range selection. Example: Select VOLUME_NAME, PCT_UTILIZED from VOLUMES where STGPOOL_NAME = 'FSERV.STGP_COPY' and PCT_UTILIZED between 60 and 70 .BFS File name extension created by the TSM server for FILE devtype scratch volumes which contain client data. Ref: Admin Guide, Defining and Updating FILE Device Classes See also: .DBB; .DMP; .EXP; FILE Billing products Chargeback/TSM, an optional plugin to Servergraph/TSM (www.servergraph.com). Bindery A database that consists of three system files for a NetWare 3.11 server. The files contain user IDs and user restrictions. The Bindery is the first thing that ADSM backs up during an Incremental Backup. ADSM issues a Close to the Bindery, followed by an Open (about 2 seconds later). This causes the Bindery to be written to disk, so that it can be backed up. Binding The process of associating an object with a management class name, and hence a set of rules. See "Files, binding to management class" Bit Vector Database concept for efficiently storing sparse data.
Database records usually consist of multiple fields. In some db applications, only a few of the fields may have data: if you simply allocate space for all possible fields in database records, you will end up with a lot of empty space inflating your db. To save space you can instead use a prefacing sequence of bits in each database record which, left to right, correspond to the data fields in the db record, and in the db record you allocate space only for the data fields which contain data for this record. If the bit's value is zero, it means that the field had no data and does not participate in this record. If the bit's value is one, it means that the field does participate in the record and its value can be found in the db record, in the position relative to the other "one" values. Example: A university database is defined with records consisting of four fields: Person name, College, Campus address, Campus phone number. But not all students or staff members reside on campus, so allocating space for the last two fields would be wasteful. In the case of staff member John Doe, the last three fields are unnecessary, and so his database record would have a bit vector value of 1000, meaning that only his name appears in the database record. Bitfile Internal terminology denoting a client object, or group of objects (Aggregate) contained in a storage pool. Some objects have an inventory entry, but no corresponding bitfile. Sometimes seen like "0.29131728", which is notation specifying an OBJECT_ID HIGH portion (0) and an OBJECT_ID LOW portion (29131728). (OBJECT_ID appears in the Archives and Backups database tables.) Note that in the BACKUPS table, the OBJECT_ID is just the low portion. See also: Cluster; OBJECT_ID Bkup Backup file type, in Query CONtent report. Other types: Arch, SpMg "Bleeding edge" A vivid term to denote the perils of committing to leading edge technology, where you are likely to suffer numerous defects in the new software or hardware. 
A good example of this is APAR IC47061. Blksize See: Block size used for removable media; FILE devclass blksize Block size used for removable media *SM sets the block size of all its (tape, optical disc) blksize tape/optical devices internally. Setting it in smit has no effect, except for tar, dd, and any other applications that do not set it themselves. TSM uses variable blocking at it writes to tape drives; ie. blocksize is 0. Generally however, for 3590 it will attempt to write out a full 256 KB block, which is the largest allowed blocksize with variable blocking. Some blocks, eg. the last block in a series, will be shorter, through TSM 5.2. (Technote 1267820 explains that as of TSM 5.3, all written blocks will have a uniform size of 256 KB, even if this means wasting space in the last block, where this change was made to optimize tape performance.) AIX: use 'lsattr -E -l rmt1' to verify; but will typically show "block_size 0" which reflects variable length. DLT: ADSMv3 sets blksize to 256 KB. Windows: Had historically been problematic in getting adapters to use a block size large enough to achieve adequate performance. TSM 5.3 for Windows addressed this, formalized a maximum block size of up to 256 KB in writing to tape through HBAs, controlled by a new DSMMAXSG utility, which modifies one Registry key for every HBA driver on your system. The name of the key is MaximumSGList. IBM Technotes: 1110623; 1240946 See also: FILE devclass blksize Blurred files General backup ramifications term derived from photography, where imaging a moving object results in its image being indistinct. If a file is being updated as it is being backed up, that imaging is "blurred". BMR Bare Metal Restore. The Kernel Group has a product of that name. However, as of 2001/02 TKG has not been committing the resources required to develop the product, given the lack of SSA disk, raw volume support, and Windows 2000. 
URL: http://www.tkg.com/products.html See also: Bare Metal Restore BOOKS Old ADSM Client User Options file (dsm.opt) option for making the ADSM online publications available through the ADSM GUI's Help menu, View Books item. The option specifies the command to invoke, which in Unix would be 'dtext', for the DynaText hypertext browser (/usr/bin/dtext -> /usr/ebt/bin/dtext). The books lived in /usr/ebt/adsm/. The component product name was "adsmbook.obj". BOT A Beginning Of Tape tape mark. See also: EOT BP Shorthand for IBM Business Partner. BPX-Tcp/Ip The OpenEdition sockets API is used by the Tivoli Storage Manager for MVS 3.7 when the server is running under OS/390 R5 or greater. Therefore, "BPX-Tcp/Ip" is displayed when the server is using the OpenEdition sockets API (callable service). "BPX" are the first three characters of the names of the API functions that are being used by the server. Braces See: {}; File space, explicit specification BRMS AS/400 (iSeries) Backup Recovery and Media Services, a fully automated backup, recovery, and media management strategy used with OS/400 on the iSeries server. The iSeries TSM client is referred to as the BRMS Application Client to TSM. The BRMS Application Client function is based on a unique implementation of the TSM Application Programming Interface (API) and does not provide functions typically available with TSM Backup/Archive clients. The solution is integrated into BRMS and has a native iSeries look and feel. There are no TSM command line or GUI interfaces. The BRMS Application client is not a Tivoli Backup/Archive client nor a Tivoli Data Protection client. You can use BRMS to save low-volume user data on distributed iSeries systems to any Tivoli Storage Manager (TSM) server. You can do this by using a BRMS component called the BRMS Application Client, which is provided with the base BRMS product. The BRMS Application Client has the look and feel of BRMS and iSeries. It is not a TSM Backup or Archive client.
There is little difference in the way BRMS saves objects to TSM servers and the way it saves objects to media. A TSM server is just another device that BRMS uses for your save and restore operations. BRMS backups can span volumes. There is reportedly a well-known throughput bottleneck with BRMS. (600Kb/s is actually quite a respectable figure for BRMS.) Ref: In IBM webspace you can search for "TSM frequently asked questions" and "TSM tips and techniques" which talk of BRMS in relation to TSM. BSAInit Initialization function in the X/Open (XBSA) version of the TSM API. Common error codes (accompanied by dsierror.log messages): 96 Option file not found. Either employ the DSMI_CONFIG environment variable to point to it, or establish a link from the API directory to the prevailing options file. See also: XBSA BU Sometime abbreviation for "backup". BU_COPYGROUPS Backup copy groups table in the TSM database. Columns: DOMAIN_NAME, SET_NAME, CLASS_NAME, COPYGROUP_NAME, VEREXISTS, VERDELETED, RETEXTRA, RETONLY, MODE, SERIALIZATION, FREQUENCY, DESTINATION, TOC_DESTINATION, CHG_TIME, CHG_ADMIN, PROFILE Buffer pool statistics, reset 'RESet BUFPool' BUFFPoolsize You mean: BUFPoolsize BUFPoolsize Definition in the server options file. Specifies the size of the database buffer pool in memory, in KBytes (i.e. 8192 = 8192 KB = 8 MB). A larger buffer pool can keep more database pages in the memory cache and lessen I/O to the database. As the ADSM (3.1) Performance Tuning Guide advised: While increasing BUFPoolsize, care must be taken not to cause paging in the virtual memory system. Monitor system memory usage to check for any increased paging after the BUFPoolsize change. (Use the 'RESet BUFPool' command to reset the statistics.) Note that a TSM server, like servers of all kinds, benefits from the host system having abundant real memory. Skimping is counter-productive. The minimum value is 256 KB; the maximum value is limited only by available virtual memory. 
Evaluate performance by looking at 'Query DB F=D' output Cache values. A "Cache Hit Pct." of 98% is a reasonable target. Default: 512 (KB) To change the value, either directly edit the server options file and restart the server, or use SETOPT BUFPoolsize and perform a RESet BUFPool. Note that boosting the bufpoolsize to a higher value will generally work without issue, but trying to reduce it may not be possible with the current demands on the TSM server (error ANR0385I Could not free sufficient buffers to reach reduced BUFPoolsize.). You can have the server tune the value itself via the SELFTUNEBUFpoolsize option - but that self-tuning tends to be too limited. Note that entries on this option in the IBM information repository may misspell it as "buffpoolsize", "bufferpoolsize", or other variants. Ref: Installing the Server IBM Technotes: 1112140; 1208540; 1240168 See also: SETOPT BUFPoolsize; LOGPoolsize; RESet BUFPool; SELFTUNEBUFpoolsize BUFPoolsize server option, query 'Query OPTion BufPoolSize' The effects of the buffer pool size can be perceived via 'Query DB Format=Detailed' in the Cache Hit Pct. Bulk Eject category 3494 Library Manager category code FF11 for a tape volume to be deposited in the High-Capacity Output Facility. After the volume has been so deposited its volser is deleted from the inventory. Burst vs. sustained data rates Burst rate refers to the transfer of data which the hardware component happens to have immediately at hand, as in a cache buffer. Sustained rate refers to the long-term average where the hardware has to also get data by seeking to a position on the medium and reading from it, which takes much longer. Note that streaming is a special case, where no medium position is necessary, and reading can occur in a sequential manner, at optimal drive speed. bus_domination Attribute for tape drives on a SCSI bus. Should be set "Yes" only if the drive is the only device on the bus. 
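To make the ~98% "Cache Hit Pct." guidance above concrete: the percentage is simply the proportion of page requests satisfied from the buffer pool. A sketch of the arithmetic, using made-up counter values (not taken from any real 'Query DB F=D' output):

```shell
# Hypothetical buffer pool counters since the last 'RESet BUFPool':
requests=1500000    # total database page requests
misses=22500        # requests that had to go to disk
# hit_pct = 100 * (requests - misses) / requests
hit_pct=$(awk -v r="$requests" -v m="$misses" \
  'BEGIN { printf "%.1f", 100 * (r - m) / r }')
echo "Cache Hit Pct.: ${hit_pct}"
```

With these figures the result is 98.5, which would meet the target; a markedly lower value suggests raising BUFPoolsize (while watching for operating system paging).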
buserver BackUp Server: AFS program to manage info in its database, serving recording and query operations. See also "bakserver" of DFS. Busy file See: Changed buta, BUTA (AFS) (Back Up To ADSM) is an ADSM API application which replaces the AFS butc. The "buta" programs are the ADSM agent programs that work with the native AFS volume backup system and send the data to ADSM. (The AFS buta and DFS buta are two similar but independent programs.) The buta tools only backup/restore at the volume level, so to get a single file you have to restore the volume to another location and then grovel for the file. This is why ADSM's AFS facilities are preferred. The "buta" backup style provides AFS disaster recovery. All of the necessary data is stored to restore AFS partitions to an AFS server, in the event of loss of a disk or server. It does not allow AFS users to backup and restore AFS data, per the ADSM backup model. All backup and restore operations require operator intervention. ADSM management classes do not control file retention and expiration for the AFS file data. Locking: The AFS volume is locked in the buta backup, but you should be backing up clone volumes, not the actuals. There is a paper published in the Decorum 97 Proceedings (from Transarc) describing the buta approach. As of AFS 3.6, butc itself supports backups to TSM, via XBSA (q.v.), meaning that buta will no longer be necessary. License: Its name is "Open Systems Environment", as per /usr/lpp/adsm/bin/README. The file backup client is installable from the adsm.afs.client installation file, and the DFS fileset backup agent is installable from adsm.butaafs.client. Executables: /usr/afs/buta/. See publication "AFS/DFS Backup Clients", SH26-4048 and http://www.storage.ibm.com/software/adsm/adafsdfs.htm . There's a white paper available at: http://www.storage.ibm.com/software/adsm/adwhdfs.htm Compare buta with "dsm.afs". 
See also: bak; XBSA buta (DFS) (Back Up To ADSM) is an ADSM API application which replaces the DFS butc. The "buta" programs are the ADSM agent programs that work with the native DFS fileset backup system and send the data to ADSM. (The AFS buta and DFS buta are two similar but independent programs.) The buta tools only backup/restore at the fileset level, so to get a single file you have to restore the fileset to another location and then grovel for the file. This is why ADSM's DFS facilities are preferred. Each dumped fileset (incremental or full) is sent to the ADSM server as a file whose name is the same as that of the fileset. The fileset dump files associated with a dump are stored within a single file space on the ADSM server, and the name of the file space is the dump-id string. The "buta" backup style provides DFS disaster recovery. All of the necessary data is stored to restore DFS aggregates to a DFS server, in the event of loss of a disk or server. It does not allow DFS users to backup and restore DFS data, per the ADSM backup model. All backup and restore operations require operator intervention. ADSM management classes do not control file retention and expiration for the DFS file data. Locking: The DFS fileset is locked in the buta backup, but you should be backing up clone filesets, not the actuals. License: Its name is "Open Systems Environment", as per /usr/lpp/adsm/bin/README. The file backup client is installable from the adsm.dfs.client installation file, and the DFS fileset backup agent is installable from adsm.butadfs.client. Executables: in /var/dce/dfs/buta/ . See publication "AFS/DFS Backup Clients", SH26-4048 and http://www.storage.ibm.com/software/adsm/adafsdfs.htm . There's a white paper available at: http://www.storage.ibm.com/software/adsm/adwhdfs.htm Compare buta with "dsm.dfs". 
See also: bak butc (AFS) Back Up Tape Coordinator: AFS volume dumps and restores are performed through this program, which reads and writes an attached tape device and then interacts with the buserver to record them. Butc is replaced by buta to instead perform the backups to ADSM. As of AFS 3.6, butc itself supports backups to TSM through XBSA (q.v.), meaning that buta will no longer be necessary. See also: bak butc (DFS) Back Up Tape Coordinator: DFS fileset dumps and restores are performed through this program, which reads and writes an attached tape device and then interacts with the buserver to record them. Butc is replaced by buta to instead perform the backups to ADSM. See also: bak bydate You mean -INCRBYDate (q.v.). C: vs C:\* specification C: refers to the entire drive, while C:\* refers to all files in the root of C: (and subdirectories as well if -SUBDIR=YES is specified). A C:\* backup will not cause the Registry System Objects to be backed up, whereas a C: backup will. C:\TSMLVSA The default location for SNAPSHOTCACHELocation, where LVSA places the Old Blocks File. Cache (storage pool) When files are migrated from disk storage pools, duplicate copies of the files may remain in disk storage ("cached") as long as TSM can afford the space, thus making for faster retrieval. As such, this is *not* a write-through cache: the caching only begins once the storage pool HIghmig value is exceeded. TSM will delete the cached disk files only when space is needed. This is why the Pct Util value in a 'Query Volume' or 'Query STGpool' report can look much higher than its defined "High Mig%" threshold value (Pct Util will always hover around 99% with Cache activated). Define HIghmig lower to assure the disk-stored files also being on tape, but at the expense of more tape action. When caching is in effect, the best way to get a sense of "real" storage pool utilization is via 'Query OCCupancy'. 
Note that the storage pool LOwmig value is effectively overridden to 0 when CAChe is in effect, because once migration starts, TSM wants to assure that everything is cached. You might as well define LOwmig as 0 to avoid confusion in this situation. Performance penalties: Requires additional database space and updating thereof. Can also result in disk fragmentation due to lingering files. Is best used for the disks which may be part of Archive and HSM storage pools, because of the likelihood of retrievals; but avoid use with disks leading a backup storage pool hierarchy, because such disks serve as buffers and so caching would be a waste of overhead. With caching, the storage pool Pct Migr value does not include cached data. See also the description of message ANR0534W. See also: Disk performance CAChe Disk stgpool parameter to say whether or not caching is in effect. Note that if you had been operating with CAChe=Yes and then turn it off, that by itself doesn't clear the cached files from the diskpool - you need to also do one of the following: - Fill the diskpool to 100%, which will cause the cached versions to be released to make room for the new files; or - Migrate down to 0, then do MOVe Data commands on all the disk volumes, which will free the cached images. 
(You should periodically do 'RESet BUFPool' to reset the statistics counts to assure valid values, particularly if the "Total Buffer Requests" from Query DB is negative (counter overflow).) If the Cache Hit Pct. value is significantly less, then the server is being substantially slowed in having to perform database disk I/O to service lookup requests, which will be most noticeable in degrading backups being performed by multiple clients simultaneously. Your ability to realize a high value in this cache is affected by the same factors as any other cache: The more new entries in the cache - as from lots of client backups - the less likely it may be that any of those resident in the cache may serve a future reference, and so the lookup has to go all the way back to the disk-based database, meaning a "cache miss". It's all probability, and the inability to predict the future. Note: You can have a high Cache Hit Pct. and yet have performance still suffer if you skimp on real memory in your server system, because all modern operating systems use virtual memory, and in a shortage of real memory, much of what had been in real memory will instead be out on the backing store, necessitating I/O (paging) to get it back in, which entails substantial delay. See topic "TSM Tuning Considerations" at the bottom of this document. See also: RESet BUFPool Cache Wait Pct. Element of 'Query DB F=D' report. Specifies, as a percentage, the number of requests for a database buffer pool page that were unavailable (because all database buffer pool pages are occupied). You want the number to be 0.00. If greater, increase the size of the buffer pool with the server option BUFPoolsize (q.v.). You can reset this value with the 'RESet BUFPool' command. Caching in stgpool, turn off 'UPDate STGpool PoolName CAChe=No' If you turn caching off, there's no reason for TSM to suddenly remove the cache images and lose the investment already made: that stuff is residual, and will go away as space is needed. 
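A quick way to act on the two cache statistics described above is to scan 'Query DB F=D' output for them. A rough sketch (the sample text below is fabricated and abbreviated; real server output formatting may differ by level):

```shell
# Hedged sample of the two cache lines from 'Query DB F=D':
q_db_output='Cache Hit Pct.: 97.20
Cache Wait Pct.: 0.15'
# Flag a hit percentage below the ~98% target and any nonzero wait percentage.
warnings=$(echo "$q_db_output" | awk -F': ' '
  /Cache Hit Pct/  { if ($2 + 0 < 98) print "Hit pct below 98% target: consider raising BUFPoolsize" }
  /Cache Wait Pct/ { if ($2 + 0 > 0)  print "Nonzero wait pct: buffer pool pages all occupied" }')
echo "$warnings"
```

In a real environment the sample text would come from 'dsmadmc ... query db f=d'; remember to 'RESet BUFPool' periodically so the counters are trustworthy.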
CAD See: Client Acceptor Daemon CadSchedName Windows Registry entry for CAD. Registry Path: 'SYSTEM\CurrentControlSet\Services\TSM Scheduler' Registry key: HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\\Parameters Use 'dsmcutil list' to verify, and 'dsmcutil query /name:"TSM Client CAD"' Calibration Sensor 3494 robotic tape library sensor: In addition to the bar code reader, the 3494 accessor contains another, more primitive vision system, based upon infrared rather than laser: it is the Calibration Sensor, located in the top right side of the picker. This sensor is used during Teach, bouncing its light off the white, rectangular reflective pads (called Fiducials) which are stuck onto various surfaces inside the 3494. This gives the robot its first actual sensing of where things are inside. CANcel EXPIration TSM server command to cancel an expiration process if there is one currently running. This does NOT require the process ID to be specified, and so this command can be scheduled using the server administrative command scheduling utility to help manage expiration processing and the time it consumes. TSM will record the point where it stopped, in the TSM database: this sets a restart checkpoint for the next time expiration is run such that it will resume from where it left off. As such, this may be preferable to CANcel PRocess. Note, though, that resumption occurs with the remainder of the list that was previously built, and so that later expiration run may be quite short, as only the end of the old list is being operated upon. If you perform another EXPIre Inventory soon after that short run finishes, you'll usually find that that one does not finish quickly: instead, it usually runs much longer, as in operating upon a fresh list. This restartability was introduced by ADSMv3 APAR IY00629, in response to issues with long-running Expirations. Issues: CANcel EXPIration is not a synchronous function. 
If you issue that cancel, then immediately try to start an Expire Inventory, the latter will fail, saying that one is already running. An observed instance had a cancel expiration performed, but for about a minute afterward, Query PRocess would show the expiration process still present, with: ANR0819I Cancel in progress Msgs: ANR0813I See also: Expiration, stop CANcel PRocess TSM server command to cancel a background process. Syntax: 'CANcel PRocess Process_Number' where the process number must be a simple integer, such as 1743 (it cannot be a locale-based representation, like 1,743). Notes: Processes waiting on resources won't cancel until they can get that resource - at which point they will go away. A cancel on a process awaiting a tape mount won't take effect until the tape is mounted *and* positioned. Another example is a Backup Stgpool process which is having trouble reading or writing a tape and is consumed with retrying the I/O: it cannot be immediately cancelled. Where a drive has dropped ready or the like, performing a drive reset, or power cycling it, often clears the stuck condition. Another example is a MOVe Data or reclamation which just started but was cancelled: it must get all the way to having its tapes mounted and positioned (the prevailing I/O operation) before the Cancel will take effect. The inability to have a cancelled process immediately terminate may be an architectural choice where it is desired that TSM's behavior be uniform across platforms and devices, so as to have uniform documentation et al. While it may be straightforward for a TSM module to terminate a TSM server I/O process in a limited set of circumstances, it might not be possible in other circumstances; and the programming to achieve it in one arena may be very different than in another. Then there is all the complexity of undoing the incomplete operation without losing any information, or duplicating any - which gets hairier as I/O becomes more dissociated, as with SAN. 
The developers probably looked into this in the past and decided to avoid putting their foot into it. When a process is canceled, it often has to wait for lock requests to clear prior to going away: SHOW LOCKS may be used to inspect. Cancellation of a tape process like Move Data typically results in the tape being left in its last-read/write position (depending upon the behavior of the tape technology) until eventual MOUNTRetention expiration or need of the drive by another process or session, such that if you re-entered the same MOVe Data command, the process could immediately pick up at the same point that it left off. CANcel REQuest *SM server command to cancel pending mount requests. Syntax: 'CANcel REQuest [requestnum|ALl] [PERManent]' where PERManent causes the volume status to be marked Unavailable, which prevents further mounts of that tape. CANcel RESTore ADSMv3 server command to cancel a Restartable Restore operation. Syntax: 'CANcel RESTore Session_Number|ALl' See also: dsmc CANcel Restore; dsmc RESTArt Restore; Query RESTore CANcel SEssion To cancel an administrative or client session. Syntax: 'CANcel SEssion [SessionNum|ALl]' A client conducting a dsm session will get an alert box saying "Stopped by user", though it was actually the server which stopped it. A client conducting a dsmc session will log msg ANS1369E and usually quit. Where a dsmc schedule process was started via CAD, the scheduler process should go away, and the dsmcad process remain. An administrative session which is canceled gets regenerated... adsm> cancel se 4706 ANS5658E TCP/IP failure. ANS5102I Return code -50. ANS5787E Communication timeout. Reissue the command. ANS5100I Session established... ANS5102I Return code -50. 
SELECT command sessions are a problem: depending on the complexity of the query it is quite possible for the server to hang, and Tivoli has stated that the Cancel may not be able to cancel the Select, such that halting and restarting the server is the only way out of that situation. Ref: Admin Guide, Monitoring the TSM Server, Using SQL to Query the TSM Database, Issuing SELECT Commands. Msgs: ANS1369E, ANS4017E See also: THROUGHPUTTimethreshold; THROUGHPUTDatathreshold Candidates A file in the .SpaceMan directory of an HSM-managed file system, listing migration candidates (q.v.). The fields on each line: 1. Migration Priority number, which dsmreconcile computes based upon file size and last access. 2. Size of file, in bytes. 3. Timestamp of last file access (atime), in seconds since 1970. 4. Rest of pathname in file system. CAP cell In a StorageTek library, a Cartridge Access Port location. The CAP is the portal by which cartridges may be inserted or removed without disturbing library operation. There may be more than one CAP cell. When the robotics scan CAP cells on some STK libraries, scanning stops at the first empty CAP cell, and so any cartridges loaded after that position are not seen. Capacity Column in 'Query FIlespace' server command output, which reflects the size of the object as it exists on the client. Note that this does *not* reflect the space occupied in ADSM. See also: Pct Util Cartridge design points Cartridges are fabricated as two half-shells which are joined with a tape spool between them. The manner in which the two half-shells are joined derives from historic experience with audio cassette cartridges... Shells can be bonded by either welding of their plastic at their joining surfaces, or be screwed together. While welding may seem more "modern", it was learned that welding results in residual stresses from the heat involved, which can somewhat distort the cartridge and result in uneven strength characteristics. 
Premium audio cassettes were always screwed together, as this resulted in a strong bond with no heat, and thus fewer issues. Cartridge devtype, considerations When using a devclass with DEVType=Cartridge, 3590 devices can only read. This is to allow customers who used 3591's (3590 devices with the A01 controller) to read those tapes with a 3590 (3590 devices with the A00 controller). The 3591 device emulates a 3490, and uses the Cartridge devtype. 3590's use the 3590 devtype. You can do a Help Define Devclass, or check the readme for information on defining a 3590 devclass, but it is basically the same as Cartridge, with a DEVType=3590. The 3591 devices exist on MVS and VM only, so the compatibility mode is only valid on these platforms. On all other platforms, you can only use a 3590 with the 3590 devtype. Cartridge System Tape (CST) A designation for the base 3490 cartridge technology, which reads and writes 18 tracks on half-inch tape. Sometimes referred to as MEDIA1. Contrast with ECCST and HPCT. See also: ECCST; HPCT; Media Type CASE SQL operator for yielding a result depending upon the evaluation of an expression. Syntax: CASE expression WHEN expression1 THEN result1 ... [ ELSE default_result ] END Example: SELECT SUM(CASE WHEN collocate = 'YES' THEN EST_CAPACITY_MB ELSE 0 END) AS "Capacity for collocated stgpools" FROM STGPOOLS CAST SQL: To alter the data representation in a query operation: CAST(Column_Name AS ___) See: TIMESTAMP Categories See: Volume Categories Category code, search for volumes 'mtlib -l /dev/lmcp0 -qC -s ____' will report only volumes having the specified category code. Category code control point Category codes are controlled at the ADSM LIBRary level. Category code of one tape in library, Via Unix command: list 'mtlib -l /dev/lmcp0 -vqV -V VolName' In TSM: 'Query LIBVolume LibName VolName' indirectly shows the Category Code in the Status value, which you can then see in numerical terms by doing 'Query LIBRary [LibName]'. 
Category code of one tape in library, Via Unix command: set 'mtlib -l /dev/lmcp0 -vC -V VolName -t Hexadecimal_New_Category' (Does not involve a tape mount.) No ADSM command performs this function, nor does the 3494 control panel provide a means for doing it. By virtue of doing this outside of ADSM, you should do 'AUDit LIBRary LibName' afterward for each ADSM-defined library name affected, so that ADSM sees and registers the change. In TSM: 'UPDate LIBVolume LibName VolName STATus=[PRIvate|SCRatch]' indirectly changes the Category Code to the Status value reflected in 'Query LIBRary [LibName]'. Category Codes Ref: Redbook "IBM Magstar Tape Products Family: A Practical Guide" (SG24-4632), Appendix A Category codes of all tapes in Use AIX command: library, list 'mtlib -l /dev/lmcp0 -vqI' for fully-labeled information, or just 'mtlib -l /dev/lmcp0 -qI' for unlabeled data fields: volser, category code, volume attribute, volume class (type of tape drive; equates to device class), volume type. The tapes reported do not include CE tape or cleaning tapes. In TSM: 'Query LIBVolume [LibName] [VolName]' indirectly shows the Category Code in the Status value, which you can then see in numerical terms by doing 'Query LIBRary [LibName]'. Category Table (TSM) /usr/tivoli/tsm/etc/category_table Contains a list of tape library category codes, like: FF00=inserted. (unassigned, in ATL) CBMR Cristie BMR, a circa 2003 approach to BMR, at least partially superseded in TSM 5.2 by ASR. CBT Changed Block Tracking, as in TSM/VE. CC= Completion Code value in I/O operations, as appears in error messages. See the back of the Messages manuals for a list of Completion Codes and suggested handling. CCW Continuous Composite WORM, as in a type of optical WORM drive that can be in the 3995 library. CD See also: DVD... 
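Building on the 'mtlib -qI' field list above (volser, category code, volume attribute, volume class, volume type), a per-category tally of library volumes can be sketched like this. The inventory lines below are fabricated for illustration; real mtlib output layout may vary by driver level:

```shell
# Fabricated 'mtlib -l /dev/lmcp0 -qI'-style lines: volser, category, ...
inventory='A00001 FF00 00 10 1
A00002 012E 00 10 1
A00003 012E 00 10 1'
# Count volumes in each category code (field 2).
counts=$(echo "$inventory" | awk '{ count[$2]++ } END { for (c in count) print c, count[c] }' | sort)
echo "$counts"
```

On a live system the here-string would instead be `mtlib -l /dev/lmcp0 -qI` output, giving a quick view of, say, how many volumes sit in the Insert category (FF00) versus a TSM private/scratch category.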
CD for Backup Set See: Backup set, on CD CDB SCSI Command Descriptor Block, as a device driver utilizes in communicating with a SCSI library such as a 3584. CDL EMC Clariion Disk Library virtual tape product. CDP See: IBM Tivoli Continuous Data Protection for Files CDRW (CD-RW) support? Tivoli Storage Manager V5.1, V4.2 and V4.1 for Windows and Windows 2000 supports removable media devices such as Iomega JAZ, Iomega ZIP, CD-R, CD-RW, and optical devices provided a file system is supplied on the media. The devices are defined using a device class of device type REMOVABLEFILE. (Ref: Tivoli Storage Manager web pages for device support, under "Platform Specific Notes") With CD-ROM support for Windows, administrators can also use CD-ROM media as an output device class. Using CD-ROM media as output requires other software which uses a file system on top of the CD-ROM media. ADAPTEC Direct CD software is the most common package for this application. This media allows other software to write to a CD by using a drive letter and file names. The media can be either CD-R (read) or CD-RW (read/write). (Ref: Tivoli Storage Manager for Windows Administrator's Guide) CE (C.E.) IBM Customer Engineer. CE volumes, count of in 3494 Via Unix command: 'mtlib -l /dev/lmcp0 -vqK -s fff6' CELERRADump The special storage pool DATAFormat used to receive data backed up from EMC Celerra NAS file servers. (Contrast with NDMPDump, NETAPPDump.) Cell (tape library storage slot) For libraries containing their own supervisor (e.g., 3494), TSM does not know or care about where volumes are stored in the library, in that it merely has to ask the library to mount them as needed, so does not need to know. See: Element; HOME_ELEMENT; Library... SHow LIBINV Cell 1 See: 3494 Cell 1 Centera Storage device from EMC which provides retention protection for archiving fixed content digital data records. Supported starting in TSM 5.2.2. 
Requires that the Centera module be installed in the TSM server directory and be in place there when the TSM server is started, for Centera operations to work within the TSM server. Note that queries of Centera storage pools may return perplexing results. This is because the estimated capacity and utilization values are obtained from the Centera storage device itself and represent the total capacity and utilization of the Centera storage device. These values do not reflect the amount of data that the TSM Server has written to the storage pool. Customer observations: - The Storage pool that the Centera data is going to has to be large enough to hold all the raw data, plus the clips and the TOC. IBM Technotes: 1306924 Central Scheduling A function that allows an *SM (Central Scheduler; CS) administrator to schedule backup, archive, and space management operations from a central location. The operations can be scheduled on a periodic basis or on an explicit date. Shows up in server command Query STATus output as "Central Scheduler: Active". Controlled via the DISABLESCheds option. Changed Keyword at end of a line in client backup log indicating that the file changed as it was being backed up, as: Normal File--> 1,544,241,152 /SomeFile Changed Backup may be reattempted according to the CHAngingretries value. In the dsmerror.log you may see an auxiliary message for the retry: " truncated while reading in Shared Static mode." See also: CHAngingretries; Retry; SERialization CHAngingretries (-CHAngingretries=) Client System Options file (dsm.sys) option to specify how many additional times you want *SM to attempt to back up or archive a file that is "in use", as discovered during the first attempt to back it up, when the Copy Group SERialization is SHRSTatic or SHRDYnamic (but not STatic or DYnamic). 
Note that the option controls retries: if you specify "CHAngingretries 3", then the backup or archive operation will try a total of 4 times - the initial attempt plus the three retries. Be aware that the retry will be right after the failed attempt: *SM does not go on to all other files and then come back and retry this one. Option placement: within server stanza. Spec: CHAngingretries { 0|1|2|3|4 } Default: 4 retries. Note: It may be futile to attempt to retry, in that if the file is large it will likely be undergoing writing for a long time. Note: Does not control number of retries in presence of read errors. This option's final effect depends upon the COpygroup's SERialization "shared" setting: Static prohibits retries if the file is busy; Dynamic causes the operation to proceed on the first try; Shared Static will cause the attempt to be abandoned if the file remains busy, but Shared Dynamic will cause backup or archiving to occur on the final attempt. Note that the prevalence of retries is greater where client-server networking is slow, as a prolonged file backup increases the probability of the file being updated during the backup attempt. See also: Changed; Fuzzy Backup; Retry; SERialization CHAngingretries, query 'dsmc Query Options CHAngingretries' CHAR() TSM SQL function to return a string of optionally limited length, left-aligned, from the given expression. Syntax: CHAR(Expression[,Len]) Note that this does *not* correspond to the standard SQL CHAR() function, which converts an int ASCII code to a character, which in standard SQL is useful for generating ASCII characters from their numeric character set position, as for example char(10) yielding Line Feed. In standard SQL, the CHAR() function is the inverse of the ASCII() function (which does not exist in TSM). TSM's SQL implementation of CHAR() results in char(65) generating string "65" rather than the character 'A'. 
See also: DECIMAL(); INTEGER(); LEFT() Check Character An extra character on the barcode of certain tape types. Also known as the Checksum Character, it may for example be the modulus 43 sum of all the values in the alphanumeric barcode label. Checked in? A volume is checked in if it shows up in a 'Query LIBVolume' (as in doing 'Query LIBVolume * 001000') or in a SELECT like: SELECT * FROM LIBVOLUMES WHERE VOLUME_NAME='001000'. If it doesn't show up, then it's not checked in (the dsmadmc return code in both means of checking will be 11, which is RC_NOTFOUND). CHECKIn LIBVolume TSM server command to check a *labeled* tape into an automated tape library. Runs as a background process. (For 3494 and like libraries, the volume must be in Insert mode.) 'CHECKIn LIBVolume LibName VolName STATus=PRIvate|SCRatch|CLEaner [OWNer=""] [CHECKLabel=Yes|No|Barcode] [SWAP=No|Yes] [SEARCH=No|Yes|Bulk] [CLEANINGS=1..1000] [WAITTime=Nmins] [VOLList=vol1,vol2,vol3 ...] [DEVType=CARTridge|3590|3592]' (Omit VolName if SEARCH=Yes. You can do CHECKLabel=Barcode only if SEARCH=Yes.) Note that this command is not relevant for LIBtype=MANUAL. Note that SEARCH=Bulk will result in message ANR8373I, which requires doing 'REPLY'. (CLIB_OPT)/> Redirection character in the server administrative command line interface, within the NetWare console. Example: load dsmc q file (CLIB_OPT)/>VOL:/fspaces.txt Case is sensitive for (CLIB_OPT)/>. The output file name should be in 8.3 format. The redirection works in the NetWare console with 'load dsmc [...] (CLIB_OPT)/> {out_file}', not in the dsmc interactive session. CLI vs. GUI See: GUI vs. CLI Client A program running on a file server or workstation that requests services of another program called the Server. 
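Returning to the CHAR() entry above: the split between standard SQL behavior (CHAR(65) maps code point 65 to 'A') and TSM's behavior (char(65) merely stringifies the number as "65") can be mimicked in shell, purely as an illustration of the two interpretations:

```shell
# Standard-SQL-style CHAR(): treat 65 as an ASCII code point -> 'A'.
# (printf %b expands an octal escape; 65 decimal = 101 octal.)
std=$(printf '%b' "\\$(printf '%03o' 65)")
# TSM-style CHAR(): just render the value as a string -> "65".
tsm=$(printf '%s' 65)
echo "standard SQL CHAR(65) -> $std ; TSM char(65) -> $tsm"
```

This is why SELECT tricks from generic SQL references that use CHAR() to synthesize control characters (e.g. char(10) for Line Feed) do not carry over to TSM's SQL.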
Client, associate with schedule 'DEFine ASSOCiation Domain_Name Schedule_Name Node_name [,Node_name...]' Client, last activity 'Query ACtlog SEARCH=Client-Name' Client, prevent storing data on server There is no server setting for rendering a client node's accesses read-only, so that they can retrieve their data from the server but not store new data. However, from the client (or server cloptset) it can be done, via Excludes. See: Archiving, prohibit; Backups, prevent Client, register with server With "Closed registration" (q.v.) you request the ADSM server administrator to register your client system. With "Open registration" the client root user may register the client via the 'dsm' or 'dsmc' commands. To register with multiple servers, enter the command 'dsmc -SErvername=StanzaName', where StanzaName is the stanza in dsm.sys which points to the server network and port address. Ref: Installing the Clients. Client, space used on all volumes 'Query AUDITOccupancy NodeName(s) [DOmain=DomainName(s)] [POoltype=ANY|PRimary|COpy]' Note: It is best to run 'AUDit LICenses' before doing 'Query AUDITOccupancy' to assure that the reported information will be current. Client Acceptor Daemon (CAD; dsmcad) A.k.a. TSM Client Acceptor Daemon/Service: A Backup/Archive client component to manage the TSM scheduler process, the Remote Client Agent daemon, and to host the Web Client, all per the port number specified via the HTTPport client option (which defaults to 1581). Runs 'dsmc schedule' as a child process when a schedule "pops". Module: dsmcad Installer component: TSM Client - Backup/Archive WEB Client, which is part of the client install package. Facility introduced in TSM 4.2 to deal with the design behavior of the client scheduler to retain all the memory it has acquired for its various process servicing, so as to reserve the resources it predictably will need for the next such scheduled task. (This is sometimes disparaged as a "memory leak" when it is merely retention.)
The developers realized that while some client systems can sustain a process which reserves that much memory, others cannot do that and handle their other workloads as well. The CAD allows only a low-overhead process to persist in the client system, to respond to the server in processing schedules. The CAD will invoke the appropriate client software function (e.g., dsmc), and allow that client software module to go away when it is done, thus releasing memory which other system tasks need. CAD also serves to start the Web Client. Operation is governed by the MANAGEDServices client option. Note that nothing about PRESchedulecmd or POSTSchedulecmd elements is relevant to CAD or the starting of CAD: those come into play only when dsmc schedule is run by CAD. It is important to realize how dsmcad starts: To run as a daemon, it first starts as an ordinary process, then forks a copy of itself into the background, whereupon that dissociated process becomes the daemon and the original process ends. Because of this daemon transition action, if dsmcad is started from /etc/inittab, the third (Action) field of the inittab line should *not* contain "respawn": it should contain "once". This keeps init from responding to the daemon transition by starting a replacement process - which can result in port contention and a start-loop. Operation involves a timer file, through which is passed info about the next scheduled event. Note that as part of its initialization, dsmcad changes its current working directory (cwd) to /. It also creates a random-name tracking file (in Unix, in /tmp, name like "aaa22qmEa", as from the mktemp() system call; in Windows, in the TSM client logs directory, name like "s2i4") when it runs the scheduler, which will contain like: EXECUTE PROMPTED 59301 with no newline at end (thus, file sizes like 22 or 23 bytes), where the number is a TCP Listen port in the dsmc child which dsmcad starts.
The scheduler process will thus look like the following in a 'ps' output: /usr/bin/dsmc schedule /tmp/aaa22qmEa -optfile=/usr/tivoli/tsm/client/ba/bin/dsm.opt A side advantage of the CAD approach is that CAD does not observe client options which govern backup/archive operations: it is the client schedule process which CAD starts which observes such client processing options, and thus you can change the client options file and have the changes be in effect the next time the scheduler process runs. CAD does not have to be restarted when the client options file is changed to adjust backup/archive options. However, CAD does observe some client options, such as its MANAGEDServices and some networking options: CAD certainly is dependent upon networking parameters, just as the scheduler itself is. Be aware that dsmcad will merrily ignore operands on its invocation command line, so don't expect it to issue errors about command line nonsense. Multiple instances of CAD can be run on a single system - see the client manual. Port number: The HTTPport value controls the Web Client port number for the Client Acceptor (dsmcad). The WEBports option controls the scheduler side's port number (oddly enough). Logging: Expect CAD-specific messages to appear in the dsmwebcl.log, in the client directory, unless you altered that via the ERRORLOGName option or DSM_LOG environment variable. (Command line redirection does not work, to have them logged elsewhere.) Scheduler-specific messages will continue to appear in the scheduler log, where you will know that CAD is in control via scheduler log messages: Scheduler is under the control of the TSM Scheduler Daemon Scheduler has been started by Dsmcad. Scheduler has been stopped. where that last message reflects the normal, momentary running of dsmc schedule when CAD starts, for it to get its bearings on schedule times.
Ref: 4.2 Technical Guide redbook; B/A Clients manual See also: HTTPport; MANAGEDServices; Remote Client Agent; Scheduler; Web Client; WEBports Client Acceptor Daemon, Web client When the dsmcad process detects an incoming WebClient session, it will start the dsmagent process or thread. This thread/process will bind to the TCPIP ports defined by the WEBPORTS option. The ports are held until this dsmagent thread/process has ended. This represents serialized service of Web sessions. It has an automatic timeout period of about 20 minutes. Client Acceptor Daemon and SCHEDMODe CAD does work with SCHEDMODe POlling and SCHEDMODe PRompted. Expect more immediacy with PRompted rather than POlling, however. Client access, disable 'DISAble' Client access, enable 'ENable' Client activity, report See: Client session activity, report Client component identifiers AIX prefix = AD1. See /usr/lpp/adsm/bin/README for full list. Client compression See: Compression Client CPU utilization high Typically due to the use of client compression. See also: Backup performance; Restoral performance Client directory, ADSM AIX: /usr/lpp/adsm/bin/ Client directory, TSM AIX: /usr/tivoli/tsm/client/ /usr/tivoli/tsm/client/ba/bin/ Linux: /opt/tivoli/tsm/client/ Windows: c:\program files\tivoli\tsm\baclient Client files, maximum in batch "MOVEBatchsize" definition in the server options file. Default: 32 Client install, automate In sites where a substantial number of clients need to be installed, some means of automating the process needs to be devised. You will find IBM site references to this under the term "silent install procedure". One tricky aspect of such installs is how to seed the client password, where PASSWORDAccess Generate is in effect.
This can be effected via client cmd: dsmc SET Password IBM Technotes: 1177576 Client installs and upgrades ADSM & TSM client install and upgrade packages have always been self-contained meaning that each provides all the ingredients to plant a fully functional client. There is never any requirement to install some "base level" first. Client IP address See: IP addresses of clients Client level, highest for AIX _._ See: AIX client: AIX levels supported Client level, revert to lower level Don't! If you want to try out a new client level, do it on a test system: DO NOT capriciously try like TSM 5.4 for a few days, and then uninstall it and reinstall 5.3. See the ANR0428W message explanation elsewhere in this document for background. Using a higher level client may well change the structure of control information involving data stored in the TSM server, as well as the journal-based backups database on the client. Client level compatibility As client software evolves, it introduces new features which require changes in the control information and possibly the format of the data as stored on the server. Obviously, an older client cannot understand advanced content like this, which is beyond its programming. See also: API; msgs ANS1245E, ANS4245E Client level vs. server level It is much more difficult to advance the TSM server software level than it is on clients, and so the question arises as to how disparate the client and server software levels can be. Over the life of the older ADSM product, it was the case that basically any client level would work with any server level. With the TSM product, though, we learn that they should not be more than one level different. For example, if your TSM client level is 4.1, you can operate with a TSM 4.1 or 3.7 server, but no lower server level is supported. Important: When advancing a client level, you cannot go back after you start using it. 
New client levels contain new features and change the way in which they store client data on the server - which would be unrecognized by a lower level client. Specifics can be found in: - The Backup/Archive Clients manual, chapter 1, under "Migrating from earlier versions". - The README file that comes with the client software, in its section "Migration Information" Client log, access from server The server administrator may want to have access to the client log. An elegant way to avoid needing password access is to define a client schedule to copy the dsmsched.log files from the client to the server: 'def sch DOMAIN GETLOG act=c obj="rcp dsmsched.log admin@server:client.log"' Client messages in server Activity This is Event Logging, resulting in ANE* Log messages in the Activity Log. See also: ANE Client Name In 'Query SEssion' output, identifies the name of the client node conducting the session. Note that if the operation is being performed across nodes, on a node of another name via the -VIRTUALNodename parameter, the name reflected will be the name specified in that parameter, not the natural name of the node performing the action. Client node A file server or workstation on which *SM has been installed and that has been registered with an *SM server. A node can only belong to one domain. You can register a second node to a different domain. Client node, register from server 'REGister Node ...' (q.v.) Be sure to specify the DOmain name you want, because the default is the STANDARD domain, which is what IBM supplied rather than what you set up. There must be a defined and active Policy Set. Note that this is how the client node gets a default policy domain, default management class, etc. Client node, reassign Policy Domain 'UPDate Node NodeName DOmain=DomainName' Node must not be currently conducting a session with the server, else command fails with error ANR2150E. 
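The node registration and domain-reassignment commands above can be batched in a dsmadmc macro; a minimal sketch, where the node name, password, domain names, and admin credentials are all illustrative placeholders:

```shell
#!/bin/sh
# Hedged sketch: generate a dsmadmc macro for the node commands above.
# MYNODE, N0depw, UNIXPROD, and WINPROD are placeholder names.
cat > node_admin.mac <<'EOF'
REGister Node MYNODE N0depw DOmain=UNIXPROD
UPDate Node MYNODE DOmain=WINPROD
EOF
# Run with:  dsmadmc -id=admin -password=secret macro node_admin.mac
# (The UPDate fails with ANR2150E if the node has an active session.)
grep -cE '^(REGister|UPDate)' node_admin.mac
```

Specifying DOmain= explicitly at REGister time avoids the node landing in the IBM-supplied STANDARD domain by default.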
Client node, remove from server 'REMove Node NodeName' Client node, rename in server 'REName Node NodeName NewName' Client node, update from server 'UPDate Node ...' (q.v.) Node must not be currently conducting a session with the server, else command fails with error ANR2150E. Client node name Obtained in this order by the client software (e.g., dsmc): 1. The 'gethostname' system call (this is the default) The owner of the files is that of the invoker. 2. The nodename from the dsm.sys file The owner of the files is that of the invoker. 3. The nodename from the dsm.opt or from the command line (i.e. dsmc -NODename=mexico) This option is meant to be temporary pretend - it requires the user enter a password even if password generate is indicated in the dsm.sys file. This mode does NOT use the login id for ADSM owner. Instead it gives access to all of the files backed up under this nodename - i.e. virtual root authority. The 'virtual root authority' is why there is a check to prevent the nodename entered being the same as the 'gethostname'. Client node policy domain name, query 'Query Node' shows node name and the Policy Domain Name associated with it. Client nodes, query 'Query Node [F=D]' Reports on all registered nodes. Client operating system Shows up in 'Query Node' Platform. Client Option Set ADSMv3+ concept, for centralized administration of client options. Via DEFine CLIENTOpt, the centralized client options are defined in the server, and are associated with a given node via REGister Node. On the *SM server, its administrator can use the Force operand to force the server-specified, non-additive options to override those in the client. This is to say that Force=Yes works for singular options, like COMPRESSAlways, but not for multiple options like INCLEXCL and DOMain, which are "additive" options: every definition adds to the collection of such definitions. 
It is implied that additive options specified in a Client Option Set cannot be overridden at the client, according to one IBM developer (though the manuals fail to say what happens). In terms of processing order, server-defined additive options logically precede those defined in the client options file: they will always be seen and processed first, before any options set in the client options file. The Client Option Set is associated with a node via the 'REGister Node' command, in operand CLOptset=OptionSetName. Example of server command defining an Include/Exclude entry: DEFine CLIENTOpt OptionSetName INCLEXCL "include d:\test1". Note that the Include or Exclude must be in quotes. If your path name also has quotes, then use single quotes for the outer pair. Include and Exclude definitions are in the specified file. The B/A client will recognize the need to parse for the include or exclude inside of the quotes. Use 'dsmc query/show inclexcl' to reveal the mingling of server-defined Include/Exclude statements and those from the client options file. Note that a client version earlier than V3 knows nothing of Client Option Sets and so server-defined options are ineffective with earlier clients. Server-based client option set changes are handed to the client scheduler when it runs, which is to say that cloptset changes in the server do not necessitate restarting the client scheduler process. (This has been verified by experience.) Management of a cloptset is facilitated by the TSMManager package, which provides a GUI-based copy and update facility. Ref: Redbook Getting Started with Tivoli Storage Manager: Implementation Guide, SG24-5416 See also: DEFine CLIENTOpt Client Option Set, associate 'UPDate Node NodeName CLOptset=Option_Set_Name' Client Option Set, dissociate 'UPDate Node NodeName CLOptset=""' Client Option Set, define 'DEFine CLOptset ______'.
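The define/associate commands above can be collected into one dsmadmc macro. A hedged sketch - the option set name STDOPTS and node MYNODE are illustrative, not from the text:

```shell
#!/bin/sh
# Hedged sketch: a macro that builds a Client Option Set, forces one
# singular option (Force=Yes works for singular options, per the text),
# adds an additive INCLEXCL entry, and attaches the set to a node.
# STDOPTS and MYNODE are placeholder names.
cat > cloptset.mac <<'EOF'
DEFine CLOptset STDOPTS DESCription="Site standard options"
DEFine CLIENTOpt STDOPTS COMPRESSAlways Yes Force=Yes
DEFine CLIENTOpt STDOPTS INCLEXCL "include d:\test1"
UPDate Node MYNODE CLOptset=STDOPTS
EOF
# Run with:  dsmadmc -id=admin -password=secret macro cloptset.mac
# Then verify from the client side with:  dsmc query inclexcl
grep -c '^DEFine CLIENTOpt' cloptset.mac
```

The quoted heredoc delimiter ('EOF') keeps the backslash in the Windows path from being interpreted by the shell.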
Client Option Set, query Do 'Query Node ______ F=D' and look for "Optionset:" to determine which option set is in effect for the client, and then do 'Query CLOptset OptionSetName' Client options, list 'dsmc Query Options' 'dsmmigquery -o' Client options, order of precedence Per doc APAR PQ54657: 1. Options entered on a scheduled command. 2. Options received from the server with a value of Force=Yes in the DEFine CLIENTOpt. The client cannot override the value. (Client Option Sets) 3. Options entered locally on the command line. 4. Options entered locally in the client options file. 5. Options received from the server with a value of Force=No. The client can override the value. 6. Default option values. Client options, settable within server Do 'help define CLIENTOpt' to see. (ADSMv3) Client options file In Windows, it is initially located in c:\program files\tivoli\tsm\baclient . Environment variable DSM_CONFIG can be used to point to the file. See: Client system options file; Client user options file Client OS Level Report element from Query Node with Format=Detailed. It may report the basic level of the operating system, but not its detailed level. For example, for an AIX platform it may report Client OS Level: 5.2 rather than like 5.2.1.4 . This is because the level is obtained via a basic system call, such as uname() in Unix, and os() in Windows. Client password, change from client dsmc SET Password In HSM: 'dsmsetpw' Client password, encryption type Currently, DES-56 is used. Client password, where stored on See: Password, client, where stored on client Client password file See: TSM.PWD Client performance factors - Make sure that you don't install any software not needed on the machine. For example, some AIX and Solaris customers install everything that comes in the client package - including HSM, which results in it always running without their knowledge, taking up system resources. - Turn off any features that are unused and which may sap processing power. 
On a Macintosh, for example, turn off AppleTalk if it is unused. See also: Backup taking too long; Restoral performance; Server performance Client polling A client/server communication technique where the client node queries the server for scheduled work, as defined by the 'SCHEDMODe POlling' option in the Client System Options file (dsm.sys), and a frequency as defined via the "QUERYSCHedperiod" option. It is in this mode that "Set RANDomize" can apply. Contrast with "PRompted" type. Client schedule See: DEFine CLIENTAction; DEFine SCHedule, client; NT; Schedule, Client; SET CLIENTACTDuration; Weekdays schedule, change the days Client schedule, contact frequency The ADSM server attempts to contact each specified client in sequence, giving each up to 10 seconds to respond before going on to the next in the list. Client schedule, one time See: DEFine CLIENTAction Client schedule, why use? Rather than use TSM client scheduling to perform regular system backups, you could just as easily use host-based scheduling mechanisms, such as Unix cron. So, why use TSM client scheduling? The simple answer is that it provides better site management capability. Consider that with TSM scheduled backups you can issue the TSM server command Query EVent and immediately survey the state of all site backups. Client schedule associations, query 'Query ASSOCiation [[DomainName] [ScheduleName]]' Client schedule implementation advice Too many administrators set up a schedule, walk away, and simply expect it to work - with no testing. Then they are bewildered when it doesn't work. Here's some advice in that area... When first implementing a schedule, give it a basic try first by setting SCHEDMODe PRompted in your client, and then use DEFine CLIENTAction on the server to kick off a basic OS Command, such as reporting the date/time. Verify execution in the schedule log. This will assure that the overall mechanism is functional. 
Then you can set SCHEDMODe as needed for real and do the DEFine SCHedule. And don't forget DEFine ASSOCiation. If your environment has restricted networking and/or firewalls, you must follow the TSM guidelines for setting up a schedule under such conditions. And keep in mind that small, capricious changes in network rules can easily keep your schedule from working. If adding a PRESchedulecmd or POSTSchedulecmd function to your scheduler regimen, keep in mind that such private software can easily foul up a schedule if misconstructed - and that it is fully your responsibility to debug problems therein; so include thorough error handling. If you still have problems, refer to the Problem Determination Guide, which contains a section on pursuing scheduler problems. Key to resolution is inspection of the schedule log and the dsmerror.log, and the server Activity Log. A schedule and an interactive session are two very different things. Just because you can successfully invoke a basic dsmc does not mean that a schedule will work. Client scheduler See: Scheduler, client Client schedules, disable See: DISABLESCheds See also: DISAble SESSions Client schedules, results, query 'Query EVent DomainName ScheduleName' to see all of them. Or use: 'Query EVent * * EXceptionsonly=Yes' to see just problems, and if none, get message "ANR2034E QUERY EVENT: No match found for this query." Client schedules, see from server? Clients don't have TCP/IP ports open until the schedule comes due, so one can't bounce off those ports to determine existence. The closest thing might be the SHOW PENDING server command, unless a comparable SQL query could be formulated. But even then, that's an *expected* client presence, not current actual. This may require some alternate access to the client to look for the ADSM client process. On NT, you can use server manager to view the status of the scheduler service on all your NT clients.
It can also be used to start, stop, or disable the service, providing you have the proper authority and the schedules are running as services on the clients. Client/server A communications network architecture in which one or more programs (clients) request computing or data services from another program (the server). Client session, cancel at server 'CANcel SEssion NN' Client session, terminate at client See: dsmc, interrupting Client session activity, report There are a few choices: - Activate TSM server accounting and report from the accounting log records; - Perform an SQL Select on the Summary table. More trivially, you can report on the number of bytes received in the last session with the client...which may be any kind of session (even querying): SELECT NODE_NAME,LASTSESS_RECVD AS \ NUM_OF_BYTES,LASTACC_TIME FROM NODES \ ORDER BY 2 DESC Client sessions, cancel all 'CANcel SEssion all' Client sessions, limit amount of data There is no feature in the product which would allow the server administrator to limit the amount of data that a client may send in a session or period of time (one day), as you may want to do to squelch wasteful Backups or looping Archive jobs. See also: "Quotas" on storage used Client sessions, limit time There is no obvious means in TSM to limit client session lengths, to cause a timeout after a certain time. As a global affector, you could try playing with the THROUGHPUTTimethreshold server option, specifying a high THROUGHPUTTimethreshold value, which your clients could not possibly achieve, thus causing any client session lasting more than THROUGHPUTTimethreshold minutes to be cancelled automatically. Client sessions, maximum, define "MAXSessions" definition in the server options file. This is a limiter for all nodes, not just one. TSM is historically deficient here, affording the administrator no means of limiting the number of sessions by node.
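The NODES-table SELECT above can be scripted and post-processed into a per-node report; a sketch assuming comma-delimited dsmadmc output (the credentials are placeholders, and the two printf lines stand in for live output):

```shell
#!/bin/sh
# Hedged sketch: convert "node,bytes,timestamp" rows - as would come from
#   dsmadmc -id=admin -password=secret -DATAONLY=YES -COMMAdelimited \
#     "SELECT NODE_NAME,LASTSESS_RECVD,LASTACC_TIME FROM NODES ORDER BY 2 DESC"
# - into per-node megabyte figures.  The printf lines are sample data.
printf '%s\n' 'NODE1,1073741824,2013-10-01 22:15:00' \
              'NODE2,524288000,2013-10-02 01:05:00' |
awk -F, '{ printf "%s %.1f MB\n", $1, $2 / 1048576 }'
```

The -DATAONLY=YES and -COMMAdelimited dsmadmc options suppress headers and make the rows trivial to split in awk.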
Client sessions, maximum, get 'Query STatus' Client sessions, multiple See: RESOURceutilization Client summary statistics Those ANE* messages in the TSM server Activity Log, summarizing client sessions. See also: ANE; CLIENTSUMMARYSTATISTICS Client System Options File File dsm.sys, used on UNIX clients, that contains a number of processing options which identify the list of TSM servers which may be contacted for services, communication, authorization, central scheduling, backup, archive, and space management options. The file is maintained by the root user on the client system. The philosophy is that for multi-user systems (e.g., Unix) there should be a system options file (this one) and a user options file (dsm.opt) to supplement it, the latter giving individual users the ability to tailor sessions within the confines of the global dsm.sys. This is in contrast to single-user systems like Windows, where only a single options file is needed. The ADSM 'dsmc Query Options' or TSM 'show options' reveals current values and validates content. ADSM AIX: /usr/lpp/adsm/bin/dsm.sys ADSM IRIX: /usr/adsm/dsm.sys TSM AIX: /usr/tivoli/tsm/client/ba/bin/dsm.sys Mac OS X: "TSM System Preferences" file in folder /Library/Preferences/Tivoli Storage Manager (The install directory provides a sample file.) Note that TSM products which make use of the TSM API, such as the TDPs, depend upon the dsm.sys in the TSM API directory. The DSM_DIR environment variable can be used to point to the directory containing this file (as well as 'dsmc' and associated resource files). See also Client User Options File. APAR IC11651 claims that if PASSWORDAccess is set to Generate in dsm.sys, then dsm.opt should *not* contain a NODE line.
See also: Client User Options file Client thread types Main Handles common housekeeping tasks: - Performs general system initialization - Parses commands - Processes options - Performs authentication with the TSM server - Policy set retrieval - Creates the producer thread - Creates the performance monitor thread - Queues up file specifications to be processed by the producer thread - Outputs status of backup to screen Signal Waiting Captures signals for the command line client: - Thread-level "Trap-style" signals: - Invalid memory references - Floating point errors - Illegal instructions - etc. - Process-level signals: - CTRL-C - CTRL-BREAK Producer - The "front end" for TSM processing - Starts a consumer thread - Retrieves file specifications queued up by the main thread - Queries the TSM server to obtain the Active Files inventory and traverses the file system seeking backup candidate objects. - Queues transactions (e.g., list of backup candidates) to be processed by the consumer thread(s) Consumer - The "back end" for TSM processing - Checks the txn (transaction) queue to see if there is work to do. - Handles File I/O, reading the object from the client disk. - Compresses the data (if applicable) - Encrypts the data (if applicable) - Sends and commits the data to the TSM server - Uses same session throughout backup Performance Monitor Attempts to optimize performance by balancing thread usage (within the constraints of the RESOURceutilization setting). Checks that the Consumer thread is keeping up with work provided by the Producer. If the Producer is getting ahead of the Consumer (queue overrun), the PM may start another Consumer thread. Conversely, if the Producer is bogged down in a file system and there are more file systems to be processed, another Producer thread may be started. And PM will shut down Producers and Consumers for which there is insufficient work to do.
Ref: Frank Ramke's "An Inside Look at the Tivoli Storage Manager Client, Part 2 of 2" Note that Producer and Consumer thread statistics can be seen in some TDPs (e.g., Domino) by turning on the "statistics" option. See also: Consumer; Producer; RESOURceutilization Client threads See: Multi-session Client Client tracing See: Tracing client Client upgrade notes Typically, the rule in effect when upgrading a client is that once you go to a new client level and use it, you cannot go back. New TSM clients extend the table entries stored in the server database, as associated with the data they send to server storage pools, and that prohibits use with earlier clients, which can't understand the revised tables. Note that in an upgrade, it is normally never necessary for the TSM client to reprocess any of the older data. See also: Client-server compatibility Client User Options file File dsm.opt, used on UNIX clients, containing options that identify the ADSM server to contact, specify backup, archive, restore, retrieve, and space management options, and set date, time, and number formats. Either locate it in /usr/lpp/adsm/bin or have the DSM_CONFIG client environment variable point to the file, or specify it via -OPTFILE on the command line. See client system options file. APAR IC11651 claims that if PASSWORDAccess is set to Generate in dsm.sys, then dsm.opt should *not* contain a NODE line.
See also: Client System Options file Client versions/releases, list SELECT CLIENT_VERSION as "C-Vers", - CLIENT_RELEASE AS "C-Rel", - CLIENT_LEVEL AS "C-Lvl", - CLIENT_SUBLEVEL AS "C-Sublvl", - PLATFORM_NAME AS "OS" , - COUNT(*) AS "Nr of Nodes" FROM NODES - GROUP BY CLIENT_VERSION,CLIENT_RELEASE, CLIENT_LEVEL,CLIENT_SUBLEVEL, PLATFORM_NAME ---------------- SELECT NODE_NAME AS "Node", - CLIENT_VERSION AS "C-Vers", - CLIENT_RELEASE AS "C-Rel", - CLIENT_LEVEL AS "C-Lvl", - CLIENT_SUBLEVEL AS "C-Slvl", - PLATFORM_NAME AS "OS" - FROM NODES Client-side deduplication New in TSM 6.2, for the client to be responsible for deduplication duties, as opposed to the TSM server, as in 6.1. Is based upon the dedup information in the storage pool where the data resides on the server, which has to be a FILE devclass storage pool, and deduped. Client-side deduplication makes the client responsible for that functionality. As with client-side compression, that helps reduce the amount of data ultimately transmitted to the TSM server, but likewise requires that the client have the processing resources to do the deed. And, the client administrator has to deal with more options in order to implement this. Obviously, as with sub-file backup, the full data content has to be sent to the TSM server at some point for full file recovery to be possible in the future. Unlike client compression, client-side dedup involves communication with the TSM server as to whether it has a given extent, unless the complexity of a special client cache is undertaken. An advantage to client-side dedup is that it can occur regardless of the storage pool devclass (server-side dedup is limited to FILE devclass) - if I'm correctly inferring this from the doc.
One thing that is not explained in the doc is whether client-side dedup is relative only to that client's data (which the client dedup cache tends to suggest)...and that in turn tends to suggest that server-side dedup is more global, relative to all data in its storage pool. An overall drawback to use of dedup is that it cannot be used with encryption, which thus prohibits the use of dedup in many environments. Beyond TSM, one should consider hardware-based dedup, as provided with newer VTLs and similar devices. While client-side dedup is very new, some caution is warranted until it's mature and fully proven. (There be APARS.) I'm concerned that this stuff is getting inordinately complex, not just for the customer, but the developers as well, where programming lapses could result in a product calamity where it's discovered that data is being lost - which would likely occur long after implementation. CLIENT_HLA SQL NODES table entry for the High Level Address of the client, being the network address (IP address, in the case of TCP/IP). Corresponds to the Query Node field "High-level Address". May be set for server-initiated sessions (V5.2 and higher clients). Alternately, may be the network address of the storage agent. See: HLAddress CLIENT_LLA SQL NODES table entry for the Low Level Address of the client, being the client's port number. Corresponds to the Query Node field "Low-level Address". See: LLAddress Client-server compatibility See the compatibility list at the front of the Backup/Archive Client manual, chapter 1, under "Migrating from Earlier Versions", "Upgrade Path for Clients and Servers". IBM Technotes: 1053218 ClientNodeName Windows Registry value, planted as part of setting up TSM Client Service (scheduler), storing the TSM node name - which may or may not be equal to the computer name. Corresponds to the TSM API value clientNodeNameP (q.v.). clientNodeNameP TSM API: A pointer to the nodename for the TSM session. 
All sessions must have a node name associated with them, and this sets it for API interactions. The node name is not case sensitive. This parameter must be NULL if the PASSWORDAccess option in the dsm.sys file is set to generate. The API then uses the system host name. Clients, report MB and files count SELECT NODE_NAME, SUM(LOGICAL_MB) AS - Data_In_MB, SUM(NUM_FILES) AS - Num_of_files FROM OCCUPANCY GROUP BY - NODE_NAME ORDER BY NODE_NAME ASC CLIENTSUMMARYSTATISTICS Undocumented TSM server option, where a value of OFF will turn off client summary statistics (ANE* messages). Changing this value requires restarting the TSM server. CLIOS CLient Input Output Sockets: a protocol like TCPIP that you can use to communicate between MVS and AIX. In a nutshell it is a faster protocol than TCP/IP. You have to specifically set up ADSM on MVS and your AIX machine to take advantage of CLIOS - in other words it does not get set up by default. CLNU The first four characters of the LTO Ultrium Universal Cleaning Cartridge sticker. See: Universal Cleaning Cartridge Clock IBM Technotes: 1153685 See also: ACCept Date; Daylight Savings Time (DST) Clopset The ADSM 'dsmc Query Options' or TSM 'show options' command will show merged options. Remove a cloptset from a node via: UPDate Node ____ CLOptset="". See: Client Option Set Closed registration Clients must be registered with the server by an ADSM administrator. This is the installation default. Can be selected via the command: 'Set REGistration Closed'. Ref: Installing the Clients Contrast with "Open registration". "closed sug" IBM APAR notation meaning that the customer reported what they believe to be a functionality problem, but which IBM regards as not a problem: IBM will take the report under advisement as a "suggestion" for possible incorporation into future software changes. Have a nice day. Cluster TSM internal terminology referring to the portion of a filespace belonging to a single client that is on a storage pool volume. 
More specifically, it is the data belonging to a specific combination of NodeId (cluster key 1) and FilespaceId (cluster key 2). Referenced in message ANR1142I as TSM performs tape reclamation. See also: Bitfile; Reclamation Clustering (Windows) See manual "TSM for Windows Quick Start" Appendix D - "Setting up clustering". CLUSTERnode, -CLUSTERnode= AIX and Windows client option to specify whether TSM is responsible for managing cluster drives in an AIX HACMP or Microsoft Cluster Server (MSCS) environment. For info on how to configure a cluster server, refer to the appropriate appendix in the client manual. Specify: Yes You want to back up cluster resources. No You want to back up local disks This is the default. The client on which you run the backup must be the one which owns the cluster resources, else the backup will not work. When CLUSTERnode Yes is in effect, the cluster name is used to generate the filespace name. However, it is not derived from the /clustername:xxxx option in the service definition. Instead, the client gets the cluster name via the Win32 API function GetClusterInformation(). The reason you need to specify it when running dsmcutil.exe is because that utility can also be used to configure remote machines. Figuring out the local cluster name is easy, but figuring out the cluster name for a remote machine is a little more difficult: the NT client may in the future be able to do this. CM Cartridge Memory, as contained in the LTO Ultrium and 3592 tape cartridges. Contained in the CM is an index (table of contents) to the location of the files that have been written to the tape. If this becomes corrupted (as happened with bad LTO drive firmware in late 2004), the drive has to find files by groping its way through the tape, which severely degrades performance. See: 3592; LTO Code 39 Barcode as used on the 3590 tape: A variable length, bi-directional, discrete, self-checking, alpha-numeric bar code. 
Code 39 encodes 43 characters: zero through nine, capital "A" through capital "Z", minus symbol, plus symbol, forward slash, space, decimal point, dollar sign and percent symbol. Each character is encoded by 9 elements (5 bars and 4 spaces), 3 of which are always wide. Code 39 was the first alphanumeric symbology to be developed. It is the most commonly used bar code symbology because it allows numbers, letters, and some punctuation to be bar coded. It is a discrete and variable length symbology. Every Code 39 character has five bars and four spaces. Every character encodation has three wide elements and six narrow elements out of nine total elements, hence the name. Any lower case letters in the input are automatically converted to upper case because Code 39 does not support lower case letters. The asterisk (*) is reserved for use as the start and stop character. Bar code symbols that contain invalid characters are replaced with a checked box to visually indicate the error. The Code 39 mod 43 symbol structure is the same as Code 39, with an additional data security check character appended. The check character is the modulus 43 sum of all the character values in a given data string. The list of valid characters for the Code 39 bar code includes: The capital letters A to Z The numbers 0 to 9 The space character The symbols - . $ / + % Cold Backup Refers to the backup of commercial database (e.g. Oracle) file system elements via the standard Backup/Archive client when the database system is shut down (cold). This is in contrast to using a TDP for backup of the database when the database system is up and alive. Collocate a single node (belatedly) Some time after a storage pool has been used in a non-collocated manner you want to have one node's existing data all collocated (as contrasted with simply having new data or reclaimed volume data collocated on a go-forward basis). Possible methods: 1.
Export the node's data, get the original node definitions out of the way, redefine the stgpool as collocated, then import. 2. Identify all volumes containing that node's (and inevitably, other nodes') data, create a temp collocated stgpool, do MOVe Data of the co-mingled volumes into it (which collocates the data for all nodes involved), then MOVe Data all other node volumes out of there, resulting in the temp stgpool containing data only belonging to the node of interest. Collocation A STGpool attribute for sequential access storage pools (not disk). Historically, *SM would by default store data from multiple clients at the end of the serial storage medium most recently written to, where the co-mingling would minimize mounts while maximizing volume usage. This is "no collocation". Economy at backup time, however, makes for expense at restoral time in that each node's data is now more widely scattered over multiple volumes. As of TSM 5.3, collocation by group is a new capability; and if you do not define a group, the arrangement effectively defaults to collocation by node. (So, the old lesson: never take defaults - always specify what you want.) Collocation shifts the relative economics by keeping files together for a given node, or node filespace, making restoral quicker at the expense of more media mounts and more incompletely-used volumes. Collocation off: co-mingled data, fewer tapes, fewer tape mounts, longer time to restore multiple files of one client. Via: 'UPDate STGpool PoolName COLlocate=No' Collocation on: tapes dedicated by client, more tapes, more tape mounts, shorter time to restore multiple files of one client. Via: 'UPDate STGpool PoolName COLlocate=Yes' By filespace: Keep files for each node, and filespace within node, together in separate tape sets. 'UPDate STGpool PoolName COLlocate=FILespace' By group: Keep files for a group of nodes together in separate tape sets.
'UPDate STGpool PoolName COLlocate=GROUP' Default: No collocation Note: If there are fewer tapes available than clients or filespaces, *SM will be forced to mingle them: Collocation is a request, not a mandate. Note that there is no "Wait=Yes" provided with this command. Advisory: Approach Collocation cautiously where the tape technology has inferior start-stop performance (e.g., traditional DLT): the performance will likely be unsatisfactory with all the repositioning, and you may see a lot of "false" cleans on the drives, with tapes that cannot be read back, lots of RSR errors, etc. With collocation, MOUNTRetention should be short, to keep the increased number of mounts from waiting for dismounts. Related expense: BAckup STGpool will run longer, as more primary storage pool volumes are involved. Also, a collocated primary tape which is updated over a long period of time will mean that its data is spread over *many* tapes in the non-collocated copy storage pool, which can make for painfully lengthy Restore Volume operations. Collocation is usually not very useful for archive data because users do not usually retrieve a whole set of files. Notes: There are a few cases where *SM will need to start a fresh "filling" collocation volume. One obvious case is when reclamation and migration are running at the same time. If reclamation is writing to an output volume that migration also wants to write to, the migration process won't wait around for it. Instead it will choose a new tape. Another case involves reclamation and files that span tapes, when it may possibly create another "filling" volume. Collocation works when sufficient volumes are available to accommodate its needs. If volumes are constrained, collocation may not happen. See Admin Guide topic "How the Server Selects Volumes with Collocation Enabled" for its volume selection rules.
Ref: IBM site Technote 1112411 Collocation, changing Changing the STGpool collocation value does not affect data previously stored. That is, *SM does not suddenly start moving data around because you changed the setting. Ref: Admin Guide, Turning Collocation On or Off Collocation, query 'Query STGpool [STGpoolName] Format=Detailed', look for "Collocate?", about halfway down. Collocation, transferring from non-collocated space Transferring data from a non-collocated tape to a collocated tape can be very slow because the server makes multiple passes through the database and the source volume. This allows files to be transferred by node/filespace without requiring excessive tape mounts of the target media. Collocation and backup For backup of a disk storage pool, a process backs up all the files for one node before going on to the next node. This is done regardless of whether the target copy pool is collocated. For backup of sequential-access primary pools, a backup process works on one primary tape at a time. If the target copy pool is collocated, the backup process copies files on the source tape by node/filespace. This means that if you are backing up a non-collocated primary pool to a collocated copy pool, it may be necessary to make multiple passes over the source tape. Collocation and database backups Collocation is often pointless - and even counterproductive - for large databases. Given the size of a database backup (as via a Data Protection agent), it tends to occupy a large portion of a tape, effectively enforcing a kind of implicit collocation. It is also the case that whereas a database backup is often far more important to business continuity than other kinds of files, it is beneficial not to have them clustered together on the same tape volumes, but rather dispersed. Another point here is that the DP may employ Data Striping: the data will be kept on separate tapes, and during restoral the tapes will be mounted and used in parallel.
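To gauge how scattered (non-collocated) each node's data currently is, one can count distinct volumes per node. A minimal sketch in Python, assuming the (node, volume) pairs have been extracted beforehand (e.g., from a SELECT on the VOLUMEUSAGE table, as shown under "Collocation by group"); the sample data here is invented:

```python
from collections import defaultdict

def volumes_per_node(usage_pairs):
    """Count distinct volumes holding each node's data.

    usage_pairs: iterable of (node_name, volume_name) tuples.
    More volumes per node means more tape mounts at restore time.
    """
    vols = defaultdict(set)
    for node, volume in usage_pairs:
        vols[node].add(volume)
    return {node: len(v) for node, v in vols.items()}

# Hypothetical sample: NODEA's data is scattered over three tapes
# (not collocated), NODEB's is confined to one tape (collocated).
sample = [("NODEA", "T00001"), ("NODEA", "T00002"),
          ("NODEA", "T00003"), ("NODEB", "T00004"),
          ("NODEB", "T00004")]
print(volumes_per_node(sample))  # {'NODEA': 3, 'NODEB': 1}
```

A high count for a node in a pool that is supposed to be collocated suggests one of the "not working" conditions discussed nearby.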
Collocation and 'MAXSCRatch' ADSM will never allocate more than 'MAXSCRatch' volumes for the storage pool: collocation becomes defeated when the scratch pool is exhausted as ADSM will then mingle clients. When a new client's data is to be moved to the storage pool, ADSM will first try to select a scratch tape, but if the storage pool already has "MAXSCRatch" volumes then it will select a volume as follows: For collocation by node: select the tape with the lowest utilization in the storage pool. For collocation by filespace: first try to fall back to the collocation-by-node scheme by selecting a volume containing data from the same client node; then select the tape with the lowest utilization in the storage pool. Ref: Admin Guide, chapter on Managing Storage Pools and Volumes, topic How the Server Selects Volumes with Collocation Enabled See also: MAXSCRatch Collocation and offsite volumes Collocation is usually not used for offsite volumes. Reasons: - Means fewer filled tapes. - Means more tapes going offsite if just partially filled, and whereas offsite means trucking, that costs more. - Offsite is for the rarity of disaster recovery, and the cost of collocation for offsite vols can almost never be justified for that rarity. See also: ANR1173E Collocation and RESTORE Volume RESTORE Volume operations are also affected by collocation: restoring from a non-collocated copy storage pool into a collocated primary storage pool can be pretty slow because the restore essentially has to sort all the files by node to collocate them in the target primary storage pool. This process could be greatly accelerated by restoring to a disk storage pool and then allowing the files to migrate into your sequential primary storage pool. The reason for this is that files in a disk storage pool are "presorted" in the database to facilitate migration. Collocation by filespace New in AIX server 2.1.5.8 - allows collocation by file system rather than client node. 
Code "COLlocate=FIlespace" in your stgpool definition. Be aware that this will fill one tape at a time for each of the filespaces. If you are already collocating by node, switching to filespace will cause that to take effect for any new tape that is used in the storage pool. For old tapes, a MOVe Data or a reclaim should separate the files belonging to different filespaces. If the node is a member of a collocation group, the filespace collocation setting overrides that. Collocation by group New in TSM 5.3: a storage pool attribute which then allows nodes to be assigned to a group, whereby the server can then collocate the data for all the nodes in the group. This constitutes a convenient way to combine the data for systems in a mail cluster, for example. Note that this change alters the default for collocation, which is now GROUP (rather than NO); and where no groups are defined, the effective default is collocation by node. This adds column COLLOCGROUP_NAME to the NODES table. To see what nodes are storing into a group-collocated storage pool, you could perform the following (expensive) select: Select NODE_NAME from VOLUMEUSAGE where STGPOOL_NAME='________'. See also: COLLOCGROUP; DEFine COLLOCGroup; DEFine COLLOCMember; Query COLLOCGroup Collocation by node To combine all data for one node on as few media volumes as possible. If the node is a member of a collocation group, the node collocation setting overrides that. Collocation groups See: Collocation by group Collocation "not working" If you find multiple nodes or filespaces being collocated onto a single volume in spite of your specifications, it can be the natural consequence of one of: - Running out of writable volumes such that *SM has no choice but to combine; - Your STGpool MAXSCRatch has enforced a limit on the number of volumes used, you exceed it, and in a logical sense run out of volumes. 
- Your volumes limit is reached while there is still space on a collocated volume, but that volume is busy such that it cannot be used for the session or process which needs to write in collocated mode. You can check volume content to see when the non-collocation occurred and correlate that with your Activity Log to see what transpired at the time. See also: MAXSCRatch COLLOCGROUP New table in TSM 5.3 for collocation groups. Partial list of columns: NODE_NAME See also: DEFine COLLOCGroup; DEFine COLLOCMember; Query COLLOCGroup Column In a silo style tape library such as the 3583, refers to a vertical section of the library which contains storage cells and/or tape drives which the robotic actuator may reach when positioned at that arc position around the circumference of the library. Column attributes (TSM db table) To see the attributes for a specific column in the TSM database, do like: SELECT * FROM COLUMNS WHERE COLNAME='MESSAGE' AND TABNAME='ACTLOG' Column title In SQL reports from the TSM database, the title above the data reported from the requested database table columns, underlined by hyphens. By default, the title is the name of the column: override via the AS parameter. Column width in command output The column width in command output can be annoyingly narrow, forcing modest schedule names and the like onto multiple lines of the column. Unfortunately, there is nothing you can do to mitigate this in your command executions. (But, file a formal request with IBM to make the columns wider.) Column width in Select output See: SELECT output, column width -COMMAdelimited dsmadmc option for reporting with output being comma-delimited. Contrast with -TABdelimited. See also: -DISPLaymode Command, define your own There is no facility in TSM for defining your own server command, to be invoked solely by name. The closest things are macros and scripts, as described in the Admin Guide.
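The -COMMAdelimited output mentioned above lends itself to mechanical post-processing. A minimal sketch in Python, assuming dsmadmc was also run with -DATAONLY=YES so that only data rows were captured, and that string columns come back quoted; the captured text here is invented:

```python
import csv
import io

# Hypothetical captured output of: dsmadmc -COMMAdelimited -DATAONLY=YES ...
# (node names and numbers are invented for illustration).
captured = io.StringIO(
    '"NODEA",1234.5,10111\n'
    '"NODEB",42.0,57\n'
)

# The csv module handles the quoting applied to character columns.
rows = [tuple(row) for row in csv.reader(captured)]
print(rows)
```

The same approach works for -TABdelimited output by passing delimiter='\t' to csv.reader.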
Command, generate from SELECT See: SELECT, literal column output Command format (HSM) Control via the OPTIONFormat option in the Client User Options file (dsm.opt): STANDARD for long-form, else SHORT. Default: STANDARD Command line, continuing Code either a hyphen (-) or backslash (\) at the end of the line and continue coding anywhere on the next line. Command line, max len of args Command line arguments data cannot be more than the ARG_MAX value in /usr/include/sys/limits.h (AIX). Command line client Refers to the command-based client, or Command Line Interface (CLI), rather than the window-oriented (GUI) client. Note that the GUI is a convenience facility: as such its performance is inferior to that of the command line client, and so should not be used for time-sensitive purposes such as disaster recovery. (So says the B/A Client manual, under "Performing Large Restore Operations".) Command line editing See: Editor Command line history See: Editor Command line length limit, client See: dsmc command line limits Command line mode for server cmds Start an "administrative client session" to interact with the server from a remote workstation, via the command: 'dsmadmc', as described in the ADSM Administrator's Reference. Command line recall See: Editor Command output, capture in a file The best approach to capturing ADSM server command output is to use the form: "dsmadmc -OUTfile=SomeFilename___" Alternately you can selectively redirect the output of commands by using ' > ' and ' >> ' (redirection). Command output, suppress Use the Client System Options file (dsm.sys) option "Quiet". See also: VERBOSE Command routing ADSMv3: Command routing allows the server that originated the command to route the command to multiple servers and then to collect the output from these servers. Format: Server1[,ServerN]: server cmd Commands, uncommitted, roll back 'rollback' COMMIT TSM server command used in a macro to commit command-induced changes to the TSM database.
Syntax: COMMIT See also: Itemcommit Commit Database term for proceeding with the writing of pending database updates. This operation commits permanent changes to the database. In TSM, this is governed by server specs LOGPoolsize and TXNGroupmax. TSM holds records of uncommitted database updates in its Recovery Log file and Recovery Log buffer pool. At the end of a transaction, the updates intended by the transaction are then written to the database. (In Roll-forward mode, the Recovery Log retains records of all database updates since the last dbbackup.) See: CKPT; LOGPoolsize Msgs: ANR0687E COMMMethod Server Options File operand specifying one or more communications methods which the server will accept as the ways in which clients reach the server. (The client's COMMMethod choice determines which one is used in that client-server session.) Should specify at least one of: HTTP (for Web admin client) IPXSPX (discontinued in TSM4) NETBIOS (discontinued in TSM4) NONE (to block external access to the server) SHAREDMEM (shared memory, within a single computer system) SNALU6.2 (APPC - discontinued in TSM4) SNMP TCPIP (the default, being TCP, not UDP) (Ref: Installing the Server, Chap. 5) COMMMethod Client System Options file (dsm.sys) option to specify the one communication method to use to reach each server. Should specify one of: 3270 (discontinued in TSM4) 400comm HTTP (for Web Admin) IPXspx NAMEdpipe NETBios PWScs SHAREdmem (shared memory, within a single computer system) SHMPORT SNAlu6.2 TCPip (is TCP, not UDP) Be sure to code it, once, on each server stanza. See also: Query SEssion; Shared memory COMMmethod server option, query 'Query OPTion CommMethod' You will see as many "CommMethod" entries as were defined in the server options file. Common Programming Interface Communications (CPIC) A programming interface that allows program-to-program communication using SNA LU6.2. See Systems Network Architecture Logical Unit 6.2. Discontinued as of TSM 4.2.
COMMOpentimeout Definition in the Server Options File. Specifies the maximum number of seconds that the ADSM server waits for a response from a client when trying to initiate a conversation. Default: 20 seconds. Ref: Installing the Server... COMMTimeout Definition in the Server Options File. Specifies the communication timeout value in seconds: how long the server waits during a database update for an expected message from a client. Default: 60 seconds. Too small a value can result in ANR0481W session termination and ANS1005E. A value of 3600 is much more realistic. A large value is necessary to give the client time to rummage around in its file system, fill a buffer with files' data, and finally send it - especially for Incremental backups of large file systems having few updates, where the client is out of communication with the server for large amounts of time. If client compression is active, be sure to allow enough time for the client to decompress large files. Ref: Installing the Server... See also: IDLETimeout; SETOPT; Sparse files, handling of, Windows COMMTimeout server option, query 'Query OPTion CommTimeOut' Communication method "COMMmethod" definition in the server options file. The method by which a client and server exchange information. The UNIX application client can use the TCP/IP or SNA LU6.2 method. The Windows application client can use the 3270, TCP/IP, NETBIOS, or IPX/SPX method. The OS/2 application client can use the 3270, TCP/IP, PWSCS, SNA LU6.2, NETBIOS, IPX/SPX, or Named Pipe method. The Novell NetWare application client can use the IPX/SPX, PWSCS, SNA LU6.2, or TCP/IP methods. See IPX/SPX, Named Pipe, NETBIOS, Programmable Workstation Communication Service, Systems Network Architecture Logical Unit 6.2, and Transmission Control Protocol/Internet Protocol. See also: Memory IPC Communication protocol A set of defined interfaces that allows computers to communicate with each other. 
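As a dsmserv.opt fragment reflecting the COMMTimeout advice above (the 3600 value is the one suggested in the text, not a universal recommendation for every site):

```
* Allow slow clients ample time between messages during a transaction;
* the default of 60 seconds is often too short for large incrementals.
COMMTimeout 3600
```

Remember that changed server options generally require a server restart (or SETOPT, where supported) to take effect.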
Communications timeout value, define "COMMTimeout" definition in the server options file. Communications Wait (CommW, commwait) "Sess State" value in 'Query SEssion' for when the server was waiting to receive expected data from the client or waiting for the communication layer to accept data to be sent to the client. An excessive value indicates a problem in the communication layer or in the client. Recorded in the 23rd field of the accounting record, and the "Pct. Comm. Wait Last Session" field of the 'Query Node Format=Detailed' server command. See also: Idle Wait; Media Wait; RecvW; Run; SendW; Sess State; Start COMM_WAIT See: Communications Wait CommW See: Communications Wait commwait See: Communications Wait Competing products ARCserve; Veritas; www.redisafe.com; www.graphiumsoftware.com Compile Time (Compile Time API) Refers to a compiled application, which may employ a Run Time API (q.v.). The term "Compile Time API" may be employed with a TDP, which is a middleware application which employs both the TDP subject API (database, mail, etc.) plus the TSM API. Compress files sent from client to server? Can be defined via COMPRESSIon option in the dsm.sys Client System Options file. Specifying "Yes" causes *SM to compress files before sending them to the *SM server. Worth doing if you have a fast client processor. "Compressed Data Grew" A condition sometimes seen in backups which use client compression, where the data actually grew in size rather than got smaller in the effort. This typically results because the data was already compressed, and the compression algorithm used by the client is not one which prevents growth. (Modern tape drives employ an advanced compression algorithm which does prevent growth.) This growth condition may be experienced in Exchange database backups where users have email attachments that are ZIP files.
COMPRESSAlways Client User Options file (dsm.opt) option to specify handling of a file which *grows* during compression as the client prepares to send the object to the TSM server. (Compression must be in effect, either by client option or node definition, for this option to come into play.) Default: v2: No, do not send the object if it grows during compression. v3: Yes, do send if it grows during compression. Notes: Specifying No can result in wasted processing... The TXNGroupmax and TXNBytelimit options govern transaction size, and if a file grows in compression when COMPRESSAlways=No, the whole transaction and all the files involved within it must be processed again, without compression. This will show up in the "Objects compressed by:" backup statistics number being negative (like "-29%"). Where feasible, use EXCLUDE.COMPRESSION rather than depend upon COMPRESSAlways, in that the latter is costly in having to re-drive transactions. Messages: ANS1310E; ANS1329S IBM Technotes: 1156827; 1049604; 1322625 Compression Refers to data compression, the primary objective being to save storage pool space, and secondarily data transfer time. Compression can be performed by TSM, or by hardware that it uses... TSM compression is governed according to REGister Node settings, client option settings (COMPRESSIon), and Devclass Format. Object attributes may also specify that the data has already been compressed such that TSM will not attempt to compress it further. Drives: Either client compression or drive compression should be used, but not both, as the compression operation at the drive may actually cause the data to expand. EXCLUDE.COMPRESSION can be used to defeat compression for certain files during Archive and Backup processing. Though compression is a node option, it is not necessarily the case that all files backed up by the node are compressed, as is evident by the COMPRESSAlways option and the available EXCLUDE.COMPRESSION option. 
This indicates that compression is a stored object attribute. (There is no way of querying it that I know of, however.) Thus, the restorability of a given file is independent of the current value of the node compression option. Tape drives also perform compression, which is to say that compression algorithms are built into its firmware such that the drive can compress data when writing, and then recognize that data has been compressed and uncompress it when reading. Compression has no effect upon the physical capacity of a tape... Some people get the odd impression that compression by tape drive somehow makes the bytes smaller as it writes them to the tape, and thus can pack more of them onto the tape. The reality is that a tape, in conjunction with a given drive technology, has a fixed capacity. For example, an LTO2 tape has a native capacity of 200 GB: it can hold only 200 billion bytes, and no more (think of them as cells on the media). When drive compression is turned on and the data is compressible, the drive writes a *representation* of the original data into the "cells", and can use fewer of them. Ref: TSM Admin Guide, "Using Data Compression" IBM Technotes: 1106967 See also: File size COMPression= Operand of REGister Node to control client data compression: No The client may not compress data sent to the server - regardless of client option values. Each client session will show: "Data compression forced off by the server" in the headings, just under the Server Version line of the client log. Yes The client must always compress data sent to the server - regardless of client options. Each client session will show: "Data compression forced on by the server" in the headings, just under the Server Version line of the client log. Client The client may choose whether or not to compress data sent to the server, via client options. Default: COMPression=Client See also: Objects compressed by COMPRESSIon (client compression) Client option. 
In Unix, code in the Client System Options file (dsm.sys), within a server stanza. Specifying "Yes" causes *SM to compress files before sending them to the TSM server, during Backup and Archive operations, for storage as given - if the server allows the client to make a choice about compression, via "COMPRESSIon=Client" in 'REGister Node'. The compression is effected in a stream manner, in the client's memory buffers which hold data recently read. (It is not the case that a compressed copy of the file is first written to some temporary disk storage area before sending that to the TSM server.) Conversely, the client has to uncompress the files in a restoral or retrieval. (The need for the client to decompress the data coming back from the server is implicit in the data, and thus is independent of any currently prevailing client option.) Is client compression good or bad? It: - Reduces network utilization (in both backup *and* restoral). - Reduces TSM server disk storage pool utilization. - Reduces I/O work in the TSM server. - Is perhaps the best way to effect compression should the final TSM storage media (tape drives) not have compression capability. - Affords some degree of data security in making network traffic and pilfered tapes hard to decipher by miscreants. With fast processors now common, client compression is much more realistic. Things to consider: - If your client system is trying to perform some intensive application work, client compression will impair that, and compression will slow. - Better compression may be had in tape drive hardware than is available on the client. Beware: If the file expands during compression then TSM will restart the entire transaction - which could involve resending other files involved in that transaction, per the TXNGroupmax / TXNBytelimit values. The slower your client, the longer it takes to compress the file, and thus the longer the exposure to this possibility.
Check at client by doing: 'dsmc Query Option' The dsmc session summary will contain the extra line: "Compression percent reduction:", which is not present without compression. Note that during the operation the progress dots will be fewer and slower than if not using compression. With "COMPRESSIon Yes", the server COMMTimeout option becomes more important - particularly with large files - as the client takes considerable time doing decompression. How long does compression take? One way to get a sense of it is to, outside of TSM, compress a copy of a typical, large file that is involved in your backups, performing the compression with a utility like gzip. Where the client options call for both compression and encryption, compression is reportedly performed before encryption - which makes sense, as encrypted data is effectively binary data, which would either see little compression, or even expansion. And, encryption means data secured by a key, so it further makes sense to prohibit any access to the data file if you do not first have the key. See also: Sparse files, handling of, Windows Compression, by tape drive Once the writing of a tape has begun with or without compression, that method will persist until the tape is full. Changing Devclass FORMAT will affect only newly used tapes. Compression, client, control methods Client compression may be controlled by several means: - Client option file spec. - Client Option Set in the server. (Do 'dsmc query options' to see what's in effect, per options file and server side Option Set.) - Mandated in the server definition of that client node. If compression is in effect by any of the above methods, it will be reflected in the statistics at the end of a Backup session ("Objects compressed by:"). Compression, overriding Yes to No You may have the following client option in force: COMPRESSIon Yes To perform an out-of-band backup with compression turned off, do like: dsmc i -COMPRESSIon=No ...
You can verify that the override is working, without performing a backup via: dsmc q opt compression -COMPRESSIon=No The override cannot work if the Node definition forces compression, or if there is a server-side clientopt specifying COMPRESSION Force=Yes. Note that EXCLUDE.COMPRESSION can even override server-side forces. Compression, prove it's happening Proving that client data compression is occurring is accomplished in the same way as outlined in topic "Encryption, prove it's happening". Compression actually achieved, to tape TSM doesn't get involved in how a storage device (i.e., drive+tape) actually stores the data which TSM sends to the hardware, so doesn't track the hardware compression rate. But you can compute that, knowing the physical capacity of the medium as used with that kind of drive, and how much data TSM wrote to it as of the time it became Full. Compression algorithm, client Is Ziv Lempel (LZI), the same as that used in pkzip, MVS HAC, and most likely unix as well; and yes, the data will normally grow when trying to compress it for a second time, as in a client being defined with COMPRESSAlways=Yes and a compressed file being backed up. Per the 3590 Intro and Planning Guide: "Data Compression is not recommended for encrypted data. Compressing encrypted data may reduce the effective tape capacity." This would seem to say that any tough binary data, like pre-compressed data from a *SM client, would expand rather than compress, due to the expectations and limitations of the algorithm. Compression being done by client node (before it sends files to server for backup and archive)? Controlled by the COMPression parameter on the 'REGister Node' and 'UPDate Node' commands. Default: Client (it determines whether to compress files). Query from ADSM server: 'Query Node Format=Detailed'. "Yes" means that it will always compress files sent to server; "No" means that it won't.
Query from client: 'dsmc Query Option' for ADSM, or 'dsmc show options' for TSM; look for "Compression". Is also seen in result from client backup and archive, in "Objects compressed by:" line at end of job. Compression being done by *SM server on 3590 tape drives? Controlled via the DEVclass "FORMAT" operand. Compression being done by tape drive? Most tape drives can perform hardware compression of data. (The 3590 can.) Find out via the AIX command: '/usr/sbin/lsattr -E -l rmt1' where "rmt1" is a sample tape drive name. TSM will set compression according to your DEVclass FORMAT=____ value. You can use SMIT to permanently change this, or do explicit: 'chdev -l rmt1 compress=yes|no'. You can also use the "compress" and "nocompress" keywords in the 'tapeutil' or 'ntutil' command to turn compression on and off for subsequent *util operations (only). Concatenation in SQL Use the CONCAT() operator to perform concatenation of strings. Syntax: CONCAT(String1, String2) Example: Select ... LIKE CONCAT('$1', '%') See also: || Configuration file An optional file pointed to by your application that can contain the same options that are found in the client options file (for non-UNIX platforms) or in the client user options file and client system options file (for UNIX platforms). If your application points to a configuration file and values are defined for options, then the values specified in the configuration file override any value set in the client options files. Configuration Manager See: Enterprise Configuration and Policy Management Connect Agents Commercial implementations of the ADSM API to provide high-performance, integrated, online backups and restores of industry-leading databases. TSM renamed them to "Data Protection" (agents) (q.v.). See http://www.storage.ibm.com/ software/adsm/addbase.htm Console mode See: -CONsolemode; Remote console -CONsolemode Command-line option for ADSM administrative client commands ('dsmadmc', etc.) 
to see all unsolicited server console output. Sometimes referred to as "remote console". Results in a display-only session (no input prompt - you cannot enter commands). And unlike the Activity Log, no date-timestamp leads each line. Start an "administrative client session" via the command: 'dsmadmc -CONsolemode'. To have Operations monitor ADSM, consider setting up a "monitor" admin ID and a shell script which would invoke something to the effect of: 'dsmadmc -ID=monitor -CONsolemode -OUTfile=/var/log/ADSMmonitor.YYYYMMDD' and thus see and log events. Note that TSM administrator commands cannot be issued in Console Mode. See also: dsmadmc; -MOUNTmode Ref: Administrator's Reference Consumer Session In Backup, the session which actually performs the data backup, including compression, encryption, and transmission. (To use an FTP analogy, this is the "data channel".) Sometimes called the "data thread" or the "data mover thread". The Consumer Session corresponds to the client Consumer Thread, and is enduring. In accounting records, Consumer Sessions may be distinguished from their related Producer Sessions only by virtue of fields 16 and 17 being zero in Producer sessions. In active sessions, there is no definitive flagging. Contrast with: Producer Session See also: RESOURceutilization Consumer Thread The client process thread which does all the "heavy lifting" in processing transactions assigned to it by the Producer Thread, to compress, encrypt, and transmit data to the TSM server. See also: Client thread types "cont> " This is the prompt that appears in a dsmadmc session where a command that is being entered is continuing on a next logical line. This is most commonly seen where the command line is being broken up into segments, as in like: UPDate SCHedule blah - where a hyphen at the end of the line tells dsmadmc that the command is being continued on the next logical line. A backslash (\) is equivalent to a hyphen as a continuation character. 
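The Operations monitoring setup described under -CONsolemode above can be sketched as a small wrapper script. This is a minimal sketch under stated assumptions: the "monitor" admin ID and the /var/log destination follow the illustrative example in that entry, and password handling is left out.

```shell
# Console-mode monitor wrapper - a sketch, not a definitive implementation.
# The "monitor" ID and log path follow the illustrative example above.
LOG="/var/log/ADSMmonitor.$(date +%Y%m%d)"      # one dated log per day
CMD="dsmadmc -ID=monitor -CONsolemode -OUTfile=$LOG"
if command -v dsmadmc >/dev/null 2>&1; then
    $CMD                  # display-only session: no commands can be entered
else
    echo "$CMD"           # no administrative client installed: show intent
fi
```

Such a wrapper would typically be started from cron or at server startup, so that each day's unsolicited console traffic lands in its own dated file.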
Contemporary Cybernetics 8mm drives 8510 is dual density (2.2gig and 5gig). (That brand was subsumed by Exabyte: see http://www.exabyte.com/home/ products.html for models.) Content Manager CommonStore CommonStore seamlessly integrates SAP R/3 and Lotus Domino with leading IBM archive systems such as IBM Content Manager, IBM Content Manager OnDemand, or TSM. The solution supports the archiving of virtually any kind of business information, including old, inactive data, e-mail documents, scanned images, faxes, computer printed output and business files. You can offload, archive, and e-mail documents from your existing Lotus Notes databases onto long-term archive systems. You can also accomplish a fully auditable document management system with your Lotus Notes client. http://www.ibm.com/software/data/ commonstore/ CONTENTS (Contents table) SQL: The *SM database table which is the entirety of all filespaces data. Along with Archives and Backups tables, constitutes the bulk of the *SM database contents. Columns: VOLUME_NAME [indexed], NODE_NAME (upper case), TYPE (Bkup, Arch, SpMg), FILESPACE_NAME (/fs), FILE_NAME (/subdir/ name), AGGREGATED (No, or n/N), FILE_SIZE (the Aggregate size), SEGMENT (n/N), CACHED (Yes/No), FILESPACE_ID, FILESPACE_HEXNAME, FILE_HEXNAME (n/N designates number within count) Sample data: VOLUME_NAME: 002339 NODE_NAME: ACSGM03 TYPE: Bkup FILESPACE_NAME: /ms/k FILE_NAME: /6/ brownthe AGGREGATED: 3/4 FILE_SIZE: 12461169 SEGMENT: 1/1 CACHED: No FILESPACE_ID: 12 FILESPACE_HEXNAME: FILE_HEXNAME: Note that there is no column for whether a file has a copy storage pool instance, as reported by the Query CONTents command, nor any count limiter. Whereas the Backups table records a single instance of the backed up file, the Contents table records the primary storage pool instance plus all copy storage pool instances. Note that no timestamp is available for the file objects: that info can be obtained from the Backups table. 
But a major problem with the Contents is the absence of anything to uniquely identify the instance of its FILE_NAME, to be able to correlate with the corresponding entry in the Backups table, as would be possible if the Contents table carried the OBJECT_ID. The best you can do is try to bracket the files by creation timestamp as compared with the volume DATE_TIME column from the Volhistory table and the LAST_WRITE_DATE from the Volumes table. See HL_NAME and LL_NAME for more about what's in the FILE_NAME column. WARNING: A Select on the CONTENTS table is *VERY* expensive and disruptive, even on a VOLUME_NAME, despite that being an indexed field. Just a Select on a single volume can take hours, and when it's running, other TSM server operations may not run. See also: BACKUPS; FILE_SIZE; Query CONtent Continuation and quoting Specifying things in quotes can always get confusing... When you need to convey an object name which contains blanks, you must enclose it in quotes. Further, you must nest quotes in cases where you need to use quotes not just to convey the object to *SM, but to have an enclosing set of quotes stored along with the name. This is particularly true with the OBJECTS parameter of the DEFine SCHedule command for client schedules. In its case, quoted names need to have enclosing double-quotes stored with them; and you convey that composite to *SM with single quotes. Doing this correctly is simple if you just consider how the composite has to end up... Wrong: OBJECTS='"Object 1"'- '"Object 2"' Right: OBJECTS='"Object 1" '- '"Object 2"' That is, the composite must end up being stored as: "Object 1" "Object 2" for feeding to and proper processing by the client command. The Wrong form would result in: "Object 1""Object 2" mooshing, which when illustrated this way is obviously wrong. The Wrong form can result in an ANS1102E error. Ref: "Using Continuation Characters" in the Admin Ref. 
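The end state of the quoting above can be checked outside of TSM. A quick shell illustration (the object names are just examples) shows how the Right and Wrong composites compare:

```shell
# How the OBJECTS composite must end up (object names are just examples):
right='"Object 1" "Object 2"'    # quoted names separated by a blank
wrong='"Object 1""Object 2"'     # mooshed - parses as one malformed object
echo "$right"
echo "$wrong"
```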
Continuing server command lines (continuation) Code either a hyphen (-) or backslash (\) at the end of the line and continue coding anywhere on the next line. Continuing client options (continuation) Lines in the Client System Options File and Client User Options File are not continued per se: instead, you re-code the option on successive lines. For example, the DOMain option usually entails a lot of file system names; so code a comfortable number of file system names on each line, as in: DOMain /FileSystemName1, ... DOMain /FileSystemName7, ... Continuous backup See: IBM Tivoli Continuous Data Protection for Files Control Session See: Producer Session Count() SQL function to calculate the number of records returned by a query. Note that this differs from Sum(), which computes a sum from the contents of a column. Convenience Eject category 3494 Library Manager category code FF10 for a tape volume to be ejected via the Convenience I/O Station. After the volume has been so ejected its volser is deleted from the inventory. Convenience Input-Output Station (Convenience I/O) 3494 hardware feature which provides 10 access slots in the door for inputting cartridges to the 3494 or receiving cartridges from it. May also be used for the transient mounting of tapes for immediate processing, not to become part of the repository. The Convenience I/O Station is just a basic pass-through area, and should not be confused with the more sophisticated Automatic Cartridge Facility magazine available for the 3590 tape drive. We find that it takes some 2 minutes, 40 seconds for the robot to take 10 tapes from the I/O station and store them into cells. When cartridges have been inserted from the outside by an operator, the Operator Panel light "Input Mode" is lit. It changes to unlit as soon as the robot takes the last cartridge from the station. When cartridges have been inserted from the inside by the robot, the Operator Panel light "Output Mode" is lit. 
The Operator Station System Summary display shows "Convenience I/O: Volumes present" for as long as there are cartridges in the station. See also the related High Capacity Output Facility. Convenience I/O Station, count of See: 3494, count of cartridges in cartridges in Convenience I/O Station CONVert Archive TSM4.2 server command to be run once on each node to improve the efficiency of a command line or API client query of archive files and directories using the Description option, where many files may have the same description. Previously, an API client could not perform an efficient query at all and a Version 3.1 or later command line client could perform such a query only if the node had signed onto the server from a GUI at least once. Syntax: CONVert Archive NodeName Wait=No|Yes Msgs: ANR0911I COPied COPied=ANY|Yes|No Operand of 'Query CONtent' command, to specify whether to restrict query output either to files that are backed up to a copy storage pool (Yes) or to files that are not backed up to a copy storage pool (No). Copy Group A policy object assigned to a Management Class specifying attributes which control the generation, destination, and expiration of backup versions of files and archived copies of files. It is the Copy Group which defines the destination Storage Pools to use for Backup and Archive. ADSM Copygroup names are always "STANDARD": you cannot assign names, which is conceptually pointless anyway in that there can only be one copygroup of a given type assigned to a management class. 'Query Mgm' does not reveal the Copygroups within the management class, unfortunately: you have to do 'Query COpygroup'. Note that Copy Groups are used only with Backup and Archive. HSM does not use them: instead, its Storage Pool is defined via the MGmtclass attribute "MIGDESTination". See "Archive Copy Group" and "Backup Copy Group". 
Copy group, Archive type, define See: DEFine COpygroup, archive type Copy group, Backup type, define See: DEFine COpygroup, backup type Copy group, Archive, query 'Query COpygroup [CopyGroupName] (defaults to Backup type copy group) Type=Archive' Copy group, Backup, query 'Query COpygroup [CopyGroupName] (defaults to Backup type copy group) [Type=Backup]' Copy group, delete 'DELete COpygroup DomainName PolicySet MgmtClass [Type=Backup|Archive]' Copy group, query 'Query COpygroup [CopyGroupName]' (defaults to Backup type copy group) COPy MGmtclass Server command to copy a management class within a policy set. (But a management class cannot be copied across policy domains or policy sets.) Syntax: 'COPy MGmtclass DomainName SetName FromClass ToClass' Then use 'UPDate MGmtclass' and other UPDate commands to tailor the copy. Note that the new name does not make it into the Active policy set until you do an ACTivate POlicyset. Copy Mode See: Backup Copy Group Copy Storage Pool A special storage pool, being sequential type, consisting of serial volumes (tapes) whose purpose is to provide space to have a surety backup of one or more levels in a standard Storage Pool hierarchy. The Copy Storage Pool is employed via the 'BAckup STGpool' command (q.v.). There cannot be a hierarchy of Copy Storage Pools, as can be the case with Primary Storage Pools. Be aware that making such a Copy results in that much more file information being tracked in the database...about 200 bytes for each file copy in a Copy Storage Pool, which is added to the file's existing database entry rather than create a separate entry. Copy Storage Pools are typically not collocated because it would mean a mount for every collocated node or file system, which could be a lot. Note that there is no way to readily migrate copy storage pool data, as for example when you want to move to a new tape technology and want to transparently move (rather than copy) the current data. 
Object copies in the copy pool contain pointers back to the primary pool where the original object resides. If the object is moved from its current primary pool to another primary pool, as via MOVe NODEdata, the copy pool object's pointer is updated to point to the new primary pool location. The only way that the file's space in the copy pool is released is if the file expires such that it is no longer in the primary storage pool. Technotes: 1222377 Ref: Admin Guide topic Estimating and Monitoring Database and Recovery Log Space Requirements Copy Storage Pool, define See: DEFine STGpool (copy) Copy Storage Pool, delete TSM provides no means for deleting a populated copy storage pool via a single command. Instead, you need to delete each of its volumes individually. Thereafter you can do DELete STGpool to dispose of the pool name. Copy Storage Pool, delete node data You cannot directly delete a node's data from a copy storage pool; but you can circuitously effect it by using MOVe NODEdata to shift the node's data to separate tapes in the copy stgpool (temporarily changing the stgpool to COLlocate=Yes), and then doing DELete Volume on the newly written volumes. Copy storage pool, files not in Invoke 'Query CONtent' command with COPied=No to detect files which are not yet in a copy storage pool. Copy Storage Pool, moving data You don't: if you move the primary storage pool data to another location you should have done a 'BAckup STGpool' which will create a content-equivalent area, whereafter you can delete the volumes in the old Copy Storage Pool and then delete the old Copy Storage Pool. Note that neither the 'MOVe Data' command nor the 'MOVe NODEdata' command will move data from one Copy Storage Pool to another. Copy Storage Pool, remove filespace Perform the same work as described in "Copy Storage Pool, remove node from" but also include "FIlespace=_________" on the MOVe NODEdata operation. 
Copy Storage Pool, remove node from Sometimes you end up with a node's data represented in an onsite copy pool, when it is not the case that the node should have such data replication. This is wasted media space, and can be eliminated - but, there is no simple command to do this - made more complex by copy pools typically not being collocated, such that data from everywhere is intermingled on the media. What you have to do is isolate the data, then obliterate it. Here's how... Temporarily change any Filling volume for the copy pool to Readonly, so as to force our next operation to start on fresh media. Now do: 'MOVe NODEdata FROMstgpool= TOstgpool=' This serves to isolate the data. Perform periodic 'Query PRocess' commands and record the volumes being used for the output of the MOVe. When the MOVe is complete, perform 'DELete Volume DISCARDdata=Yes' on each of those just-written volumes. The desired data is now gone from the copy pool. Change the Filling volumes of the copy pool back to Readwrite. Copy Storage Pool, restore files directly from Yes, if the primary storage pool is unavailable or one of its volumes is destroyed, data can be obtained directly from the copy storage pool Ref: TSM Admin Guide chapter 8, introducing the Copy Storage Pool: ...when a client attempts to retrieve a file and the server detects an error in the file copy in the primary storage pool, the server marks the file as damaged. At the next attempt to access the file, the server obtains the file from a copy storage pool. Ref: TSM Admin Guide, chapter Protecting and Recovering Your Server, Storage Pool Protection: An Overview... "If data is lost or damaged, you can restore individual volumes or entire storage pools from the copy storage pools. TSM tries to access the file from a copy storage pool if the primary copy of the file cannot be obtained for one of the following reasons: - The primary file copy has been previously marked damaged. 
- The primary file is stored on a volume that is UNAVailable or DEStroyed. - The primary file is stored on an offline volume. - The primary file is located in a storage pool that is UNAVailable, and the operation is for restore, retrieve, or recall of files to a user, or export of file data." Copy Storage Pool, restore volume from 'RESTORE Volume ...' Copy Storage Pool & disaster recovery The Copy Storage Pool is a secondary recovery vehicle after the Primary Storage Pool, and so the Copy Storage Pool is rarely collocated for optimal recovery as the Primary pool often is. This makes for a big contention problem in disaster recovery, as each volume may be in demand by multiple restoral processes due to client data intermingling. A somewhat devious approach to this problem is to define the Devclass for the Copy Storage Pool with a FORMAT which disables data compression by the tape drive, thus using more tapes, and hence reducing the possibility of collision. Consider employing multiple management classes and primary storage pools with their own backup storage pools to distribute data and prevent contention at restoral time. If you have both high and low density drives in your library, use the lows for the Copy Storage Pool. Or maybe you could use a Virtual Tape Server, which implicitly stages tape data to disk. Copy Storage Pool up to date? The following Select submitted by Wanda Prather is an excellent way to tell: SELECT SUM(CASE when q.POOLTYPE = 'PRIMARY' then o.NUM_FILES else -o.NUM_FILES end) as Net_Files from OCCUPANCY o, STGPOOLS q where o.STGPOOL_NAME = q.STGPOOL_NAME and q.POOLTYPE in( 'PRIMARY', 'COPY' ) Another: SELECT s.POOLTYPE, Cast(Sum(o.NUM_FILES) as Decimal(9)) as "FILES", Cast(Sum(o.PHYSICAL_MB) as Decimal(13,2)) as "MB" from STGPOOLS s, OCCUPANCY o where o.STGPOOL_NAME = s.STGPOOL_NAME group by s.POOLTYPE Run such a check after your set of BAckup STGpool tasks are done. If the result is non-zero, the copy storage pools aren't up to date. 
This comparison Select proved useful to one customer who had experienced a TSM database problem, supposedly resolved with IBM's help, but the fix had resulted in primary storage pool entries being deleted but the corresponding entries in the copy storage pool becoming "zombies" which BAckup STGpool would not reconcile. You can create variations on this theme to check one pool pair at a time, if that is more appropriate for you. Copy Storage Pool volume damaged If a volume in a Copy Storage Pool has been damaged - but is not fully destroyed - try doing a MOVe Data first in rebuilding the data, rather than just deleting the volume and doing a fresh BAckup STGpool. Why? If you did the above and then found the primary storage pool volume also bad, you would have unwittingly deleted your only copies of the data, which could have been retrieved from that partially readable copy storage pool volume. So it is most prudent to preserve as much as possible first, before proceeding to try to recreate the remainder. Copy Storage Pool volume destroyed If a volume in a Copy Storage Pool has been destroyed, the only reasonable action is to make this known to ADSM by doing 'DELete Volume' and then do a fresh 'BAckup STGpool' to effectively recreate its contents on another volume. (Note that Copy Storage Pool volumes cannot be marked DEStroyed.) Copy Storage Pools current? The Auditocc SQL table allows you to quickly determine if your Copy Storage Pools have all the data in the Primary Storage Pools, by comparing: BACKUP_MB to BACKUP_COPY_MB ARCHIVE_MB to ARCHIVE_COPY_MB SPACEMG_MB to SPACEMG_COPY_MB If the COPY value is higher, it indicates that you have the same data in multiple Copy Storage Pools, as in an offsite pool as well as an onsite copy storage pool. In any case, the *_COPY_MB value should be an even multiple of the primary pool value. COPY_TYPE Column in VOLUMEUSAGE SQL table denoting the types of files: Backup, ARchive, etc. 
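The Auditocc comparison under "Copy Storage Pools current?" above can be sketched numerically. This is a minimal sketch with invented megabyte values; the point is only the exact-multiple test that the text describes.

```shell
# Sketch of the Auditocc currency test with invented numbers: the copy
# figure should be an exact multiple of the primary figure.
BACKUP_MB=1500
BACKUP_COPY_MB=3000      # e.g. one offsite plus one onsite copy pool
if [ "$BACKUP_MB" -gt 0 ] && [ $((BACKUP_COPY_MB % BACKUP_MB)) -eq 0 ]; then
    echo "current: data in $((BACKUP_COPY_MB / BACKUP_MB)) copy pool(s)"
else
    echo "copy pools lag the primary pool"
fi
```

The same test would be repeated for the ARCHIVE_MB and SPACEMG_MB pairs.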
COPYContinue DEFine/UPDate STGpool operand for how the server should react when COPYSTGpools is in effect and an error is encountered in generating the copy storage pool image. The default is Yes, to continue copying, but not to the problem copy storage pool, for the duration of that client backup session. A new session will begin with no prior state information about previous problems. Note that this option may be useless with TDPs, which don't retry transactions. Msgs: ANR4737E Copygroup See: Copy Group COPYGROUPS There is no COPYGROUP or COPYGROUPS SQL table in TSM's database. Instead, there are AR_COPYGROUPS and BU_COPYGROUPS (q.v.). COPYSTGpools TSM 5.1+ storage pool parameter, part of the Simultaneous Write function, providing the possibility to simultaneously store a client's files into the usual target primary storage pool as well as one or more copy storage pools or active-data pools. The simultaneous write to the copy pools only takes place during backup or archive from the client. In other words, when the data enters the storage pool hierarchy. It does not take place during data migration from an HSM client nor on a LAN free backup from a Storage Agent. Naturally, if your storage pools are on tape, you will need a tape drive for the primary storage pool action and the copy storage pool action: 2 drives. Your mount point usage values must accommodate this. Maximum length of the copy pool name: 30 chars Maximum number of copy pool names: 10, separated by commas (no intervening spaces) This option is restricted to only primary storage pools using NATIVE or NONBLOCK data format. The COPYContinue parameter may also be specified to further govern operation. Note: The function provided by COPYSTGpools is not intended to replace the BACKUP STGPOOL command. If you use the COPYSTGpools parameter, continue to use BACKUP STGPOOL to ensure that the copy storage pools are complete copies of the primary storage pool. There are cases when a copy may not be created. 
See also: Simultaneous Writes to Copy Stpools COUNT(*) SQL statement to yield the number of rows satisfying a given condition: the number of occurrences. There should be as many elements to the left of the count specification as there are specified after the GROUP BY, else you will encounter a logical specification error. Example: SELECT OWNER, COUNT(*) AS "Number of files" FROM ARCHIVES GROUP BY OWNER SELECT NODE_NAME, OWNER, COUNT(*) AS "Number of files" FROM ARCHIVES GROUP BY NODE_NAME,OWNER See also: AVG; MAX; MIN; SUM COUrier DRM media state for volumes containing valid data and which are in the hands of a courier, going offsite. Their next state should be VAULT. See also: COURIERRetrieve; MOuntable; NOTMOuntable; VAult; VAULTRetrieve COURIERRetrieve DRM media state for volumes empty of data, which are being retrieved by a courier. Their next state should be ONSITERetrieve. See also: COUrier; DRM; MOuntable; NOTMOuntable; VAult; VAULTRetrieve CPIC Common Programming Interface for Communications. .cpp Name suffix seen in some messages. Refers to a C++ programming language source module. CRC Cyclic Redundancy Checking. Available as of TSM 5.1: provides the option of specifying whether a cyclic redundancy check (CRC) is performed during a client session with the server, or for storage pools. The server validates the data by using a cyclic redundancy check which can help identify data corruption. Why should CRC be necessary? Doesn't TCP and like networking take care of that? The TSM 5.1 Technical Guide redbook says: "New communication and SAN hardware products are more susceptible to data loss, thus the need for checksums." Much faster links and disparate routing of packets can also lead to errors in segment reassembly. The CRC values are validated when AUDit Volume is performed and during restore/retrieve processing, but not during other types of data movement (e.g., migration, reclamation, BAckup STGpool, MOVe Data). 
It is important to realize that when stgpool CRCData=Yes, the CRC values are stored with the data, when it first enters TSM, via Backup or Archive. The co-stored CRC info is thereby stored with the data and is associated with it for the life of that data in the TSM server, and will move with the data even if the data is moved to a storage pool where CRC recording is not in effect. Likewise, if data was not originally stored with CRC, it will not attain CRC if moved into a CRCed storage pool. The Unix 'sum' command performs similar CRC processing. Activated: VALIdateprotocol of DEFine SERver; CRCData operand of DEFine STGpool; REGister Node VALIdateprotocol operand; Verified: "Validate Protocol" value in Query SERver; "Validate Data?" value in Query STGpool Performance issues: IBM Technote 1153107 makes clear that CRC is a "very CPU intensive function", resulting in high CPU usage. Ref: IBM site Technotes 1143615, 1156715, 1079385, 1104372, 1191824 See: VALIdateprotocol CREAT SQLT Undocumented TSM server command to create an ad hoc SQL table, as seen in various Technotes and APARs. Dispose of such an ad hoc table via the command DROP SQLT. Cristie Bare Machine Recovery IBM-sponsored complementary product for TSM: A complete system recovery solution that allows a machine complete recovery from normal TSM backups. http://www.ibm.com/software/tivoli/ products/storage-mgr/cristie-bmr.html Cross-client restoral See: Restore across clients Cross-node restoral See: Restore across clients Cryptography See: Encryption CSQryPending Verb type as seen in ANR0444W message. Reflects client-server query for pending scheduled tasks. CSResults Verb type for a client session, for the client sending session results to the TSM server for ANE message recording. The client schedule log entries corresponding to this are Sending results for scheduled event '________'. Results sent to server for scheduled event '________'. Usually, such a verb state is very short-lived and thus unseen. 
CST See: Cartridge System Tape See also: ECCST; HPCT; Media Type CST-2 Designation for 3490E (q.v.). ctime In Unix, this is the inode metadata change time for a file, as when a chmod or similar command is performed on the file. (It is *not* the file creation time: unix does not record the file creation time.) See also: atime; mtime Ctime and backups The "inode change time" value (ctime) reflects when some administrative action was performed on a file, as in chown, chgrp, chmod, and like operations. When TSM Backup sees that the ctime value has changed for a previously backed up file, it sends the new metadata, but does not back up the file again. The backup will show "Updating-->" for that file. Ctrl-C The act of holding down the ASCII keyboard Control key and then pressing the C key, to generate an ETX byte, which conventionally signifies an interrupt to the running process, to rather abruptly stop it. (See the 'stty' man page, INTR parameter.) The formal signal involved is SIGINT. IBM recommends that Ctrl-C not be used with the TSM CLI, as it may result in a program exception or unexpected behavior: use the 'Q' key instead. Current Physical File (bytes): None Seen in Query PRocess output, unchanged over multiple invocations of that command. The vendor does not define why this should be. From observation, my belief is that this is the TSM server deferring further work on the process, in preference to giving service to more demanding, higher priority processes and sessions. Other possibilities (guesses): - The tape from which a copy is being performed is being repositioned to the next Physical File. - A lock is preventing progress. - TSM may be busy doing other things such that the thread is being starved. - At the end of a Move Data operation, where the operation is about to conclude. 
Examples: SELECT CURRENT_DATE FROM LOG SELECT * FROM ACTLOG WHERE DATE(DATE_TIME)=CURRENT_DATE See also: Set SQLDATETIMEformat CURRENT_TIME SQL: The current time, in HH:MM:SS format. See also: Set SQLDATETIMEformat CURRENT_TIMESTAMP SQL: The current date and time, like YYYY-MM-DD HH:MM:SS or YYYYMMDDHHMMSS. See also: Set SQLDATETIMEformat CURRENT_USER SQL: Your administrator userid, in upper case. D2D Colloquialism for Disk-to-Disk, as in a disk backup scheme where the back store is disk rather than tape. This seems an appealing concept, but... - Total cost of ownership is very expensive. While a disk drive can be very inexpensive, keeping a disk farm running and reliable is costly (power, cooling, floor space, administrators, management) because it is an active medium, always spinning, whether used or not. - There is typically no data compression performed at the disk subsystem level, as there is with tape (where the tape drive compresses the data). See also: DISK; Disk-only backups D2D backup Really an ordinary backup, where the TSM server primary storage pool is of random access devtype DISK rather than serial access FILE or one of the various tape drive types. While customers are often lured to this as a performance panacea, they fail to consider the more important realities of Restore. See topic "DISK" for details. D2T Colloquialism for Disk-to-Tape, as in a disk backup scheme where the back store is tape - the traditional backup medium. Damaged files These are files in which the server found errors when a user attempted to restore, retrieve, or recall the file; or when an 'AUDit Volume' is run, with resulting Activity Log message like: "ANR2314I Audit volume process ended for volume 000185; 1 files inspected, 0 damaged files deleted, 1 damaged files marked as damaged." 
A file marked as Damaged in one session with the volume does not preclude attempts to read it in subsequent sessions; and, indeed, the file may be readable in that later session, as via a different tape drive. A file marked as Damaged but successfully read in a session does not result in the Damaged flagging being reset: an AUDit Volume is necessary for that. Over time, a volume may end with numerous files flagged as Damaged, with no impairment in volume usage. The "Number of Read Errors" seen from the Query Volume F=D command does not necessarily correlate with the number of files marked as Damaged. TSM knows when there is a copy of the file in the Backup Storage Pool, from which you may recover the file via 'RESTORE Volume', if not 'RESTORE STGpool'. If the client attempts to retrieve a damaged file, the TSM server knows that the file may instead be obtained from the copy stgpool and so goes there if the access mode of the alternate volume is acceptable. The marking of a file as Damaged will not cause the next client backup to again back up the file, given that the supposed damage may simply be a dirty tape drive. Doing an AUDit Volume Fix=Yes on a primary storage pool volume may cause the file to be deleted therefrom, and the next backup to store a fresh copy of the file into that storage pool. Msgs: ANR0548W; ANR1167E See also: Bad tape, how to handle; Destroyed; SHow DAMAGEd Damaged files, find Run 'RESTORE STGpool Preview=Yes' See also: Destroyed; SHow DAMAGEd Damaged files, fix To un-mark a file marked Damaged, run AUDit Volume ... Fix=No which will remove the mark if the data can be read. If still marked Damaged, see "Bad tape, how to handle". Damaged files on a volume, list 'Query CONtent VolName ... DAmaged=Yes' Interestingly, there is no "Damaged" column available to customers in the Contents table in the TSM SQL database. So how to know which tapes to examine, when surveying your library? 
    One healthy approach is to do a Select on the Volumes table, seeking
    any volumes with a non-zero READ_ERRORS or WRITE_ERRORS value. There is
    also the unsupported SHow DAMAGEd command.
    See also: Destroyed; SHow DAMAGEd

dapismp
    The general name of the sample program provided with the TSM API.
    Refer to the API manual for info.

dapismp.exe
    Sample program for the TSM API in the Windows environment. On Windows
    it is necessary to install the client API SDK files to get the API
    sample files installed. The 'dapismp' executable is included within
    these sample API files.

DAT
    Digital Audio Tape, a 4mm format which, like 8mm, has been exploited
    for data backup use. It is a relatively fragile medium, intended more
    for convenience than continuous use. Note that *SM Devclass refers to
    this device type as "4MM" rather than "DAT". A DDS cartridge should be
    retired after 2000 passes, or 100 full backups. A DDS drive should be
    cleaned every 24 hours of use, with a DDS cleaning cartridge. Head
    clogging is relatively common.
    Recording formats (DDS = Digital Data Storage):
    DDS2  - for DDS2 format without compression
    DDS2C - for DDS2 format with hardware compression
    DDS3  - for DDS3 format without compression
    DDS3C - for DDS3 format with hardware compression

Data access control mode
    One of four execution modes provided by the 'dsmmode' command.
    Execution modes allow you to change the space management related
    behavior of commands that run under dsmmode. The data access control
    mode controls whether a command can access a migrated file, sees a
    migrated file as zero-length, or receives an input/output error if it
    attempts to access a migrated file.
    See also: execution mode

Data channel
    In a client Backup session, the part of the session which actually
    performs the data backup.
    Contrast with: Producer Session
    See: Consumer Session

Data mover
    A named device that accepts a request from TSM to transfer data and can
    be used to perform outboard copy operations.
    As used with Network Attached Storage (NAS) file servers.
    Related: REGISTER NODE TYPE=NAS

Data mover thread
    See: Consumer session

Data ONTAP
    Microkernel operating system in NetApp systems.

Data Protection Agents
    Tivoli name for the Connect Agents that were part of ADSM. More common
    name: TDP (Tivoli Data Protection). The TDPs are specialized programs
    based upon the TSM API to back up a specialized object, such as a
    commercial database, like Oracle. As such, the TDPs typically also
    employ an application API so as to mingle within an active database,
    for example. You can download the TDP software from the TSM web site,
    but you additionally need a license and license file for the software
    to work.
    See also: TDP

Data session
    See: Consumer session

"Data shredding"
    TSM 5.4+ feature wherein the TSM server overwrites data that is moved
    or deleted from designated random access storage pools - a security and
    privacy measure, to obliterate abandoned data areas. Not for sequential
    storage pools - where no such random updating is architecturally
    supported. Realize that such actions can result in the inability to go
    back to an earlier image of the TSM database, as shredding thwarts
    REUsedelay.

Data thread
    In a client Backup session, the part of the session which actually
    performs the data backup.
    Contrast with: Producer Session
    See: Consumer session

Data transfer time
    Statistic in a Backup report: the total time TSM requires to transfer
    data across the network. Transfer statistics may not match the file
    statistics if the operation was retried due to a communications failure
    or session loss. The transfer statistics display the bytes attempted to
    be transferred across all attempts to send files. Beware that if this
    value is too small (as when sending a small amount of data) then the
    resulting Network Data Transfer Rate will be skewed, reporting a higher
    number than the theoretical maximum. Look instead to the Elapsed time,
    to compute sustained throughput.
    Activity Log message: ANE4963I
    Ref: Backup/Archive Client manual, "Displaying Backup Processing
    Status"

Database
    The TSM Database is a proprietary database, governing all server
    operations and containing a catalog of all stored file system objects.
    All data storage operations effectively go through the database.
    The TSM Database contains:
    - All the administrative definitions and client passwords;
    - The Activity Log;
    - The catalog of all the file system objects stored in storage pools on
      behalf of the clients;
    - The names of storage pool volumes;
    - In a No Query Restore, the list of files to participate in the
      restoral;
    - Digital signatures as used in subfile backups.
    The database is proprietary and designed especially for TSM's needs.
    Its structure is not published, per se, but is outlined in the 2001/02
    SHARE presentation "Everything You Always Wanted To Know About the TSM
    Database", by Mike Kaczmarski. Named in dsmserv.dsk, as used when the
    server starts. (See "dsmserv.dsk".) Customers may perform database
    queries via the SELECT command (q.v.) and via the ODBC interface.
    There is indexing.
    The TSM database is dedicated to the purposes of TSM operation. It is
    not a general purpose database for arbitrary use, and there is no
    provided means for adding or thereafter updating arbitrary data.
    Why a proprietary db, and not something like DB2? Well, in the early
    days of ADSM, DB2's platform support was limited, so this
    product-specific, universal database was developed. It is also the case
    that this db is optimized for storage management operations in terms of
    schema and locking. But the problem with the old ADSM db is that it is
    very limited in features, and so a DB2 approach is being re-examined.
    See also: Database, space taken for files; DEFine SPACETrigger; ODBC;
    Select

Database, back up
    Perform via TSM server command 'BAckup DB' (q.v.).
    Usually, the backup is to a scratch tape, but you can specify any tape
    which is not already defined to a storage pool; or, you can back up to
    a File type volume. Note that there is no direct query command for
    later revealing which tape a given database backup was written to: you
    have to do 'Query VOLHistory Type=DBBackup'.

Database, back up unconventionally
    An unorthodox approach for supporting point-in-time restorals of the
    ADSM database that came to mind would be to employ standard *SM
    database mirroring and at an appointed time do a Vary Off of the
    database volume(s), which can then be image-copied to tape, or even be
    left as-is, with a replacement disk area put into place (Vary On)
    rotationally. In this way you would never have to do a Backup DB again.

Database, back up to a scratch 3590 tape in the 3494
    Perform like the following example:
    'BAckup DB DEVclass=OURLIBR.DEVC_3590 Type=Full'

Database, back up to a specific 3590 tape in the 3494
    Perform like the following example:
    'BAckup DB DEVclass=OURLIBR.DEVC_3590 Type=Full VOLumenames=000049
    Scratch=No'

Database, "compress"
    See: dsmserv UNLOADDB (TSM 3.7)

Database, commercial, back up
    From time to time we see postings from TSM administrators who, lacking
    understanding of how such systems work, propose backup of those running
    systems at the file level. Backing up a database or similar server in a
    detached, file-oriented manner while that facility is running is a bad
    idea, in that parts of the database are either in flight or in server
    memory. Further, the stepwise backup of component files inevitably
    results in inter-file inconsistency. While the backup may run fine, the
    probability is high that attempted restoral would result in
    incoherency, where the server may fail to run with such data, or data
    may be lost or corrupted.
    The TSM TDPs operate with APIs provided by the vendors of such database
    server products so that the backup can work in concert with the active
    server's operations in order to produce a restorable whole.

Database, content and compression
    The TSM Server database, through version 5, has a b-tree organization
    with internal references to index nodes and siblings. The database
    grows sequentially from beginning to end, and pages that are deleted
    internally are re-used later when new information is added. The only
    utility that can compress the database so that "gaps" of deleted pages
    are not present is the database dump/load utility. After extensive
    database deletions, due to expiration processing or filespace/volume
    delete processing, pages in the midst of the database space may become
    free, while pages closer to the beginning or end of the database remain
    allocated. To reduce the size of your database, sufficient free pages
    must exist at the end of the linear database space that is allocated
    over your database volumes. A database dump followed by a load will
    remove free pages from the beginning of the database space to minimize
    free space fragmentation and may allow the database size to be reduced.

Database, convert second primary volume to volume copy (mirror)
    'REDuce DB Nmegabytes'
    'DELete DBVolume 2ndVolName'
    'DEFine DBCopy 1stVolName 2ndVolName'

Database, create
    'dsmfmt -db /adsm/DB_Name Num_MB'
    where the final number is the desired size for the database, in
    megabytes, and is best defined in 4MB units, in that 1 MB more (the LVM
    Fixed Area, as seen with SHow LVMFA) will be added for overhead if a
    multiple of 4MB, else more overhead will be added. For example: to
    allocate a database of 1GB, code "1024": ADSM will make it 1025.

Database, defragment
    See: dsmserv UNLOADDB (TSM 3.7)

Database, defragment?
    You can gauge how much your TSM database is fragmented by doing 'Query
    DB' and comparing the Pct Util against the Maximum Reduction: a
    "compacted" database with a modest utilization will allow a large
    reduction, but a "fragmented" one will be much less reducible. Note
    that while a TSM db can be defragmented via an unload-reload
    undertaking, that will not necessarily improve performance, as it will
    compact the database content onto fewer volumes, resulting in
    congestion and seek contention, while other dbvolumes go empty and
    wasted.

Database, delete table entry
    See: Backup files, delete; DELRECORD; File, selectively delete from
    *SM storage

Database, designed for integrity
    The design of the database updating for ADSM uses 2-phase commit,
    allowing recovery from hardware and power failures with a consistent
    database. The ADSM Database is composed of 2 types of files, the DB and
    the LOG, which should be located on separate volumes. Updates to the DB
    are grouped into transactions (a set of updates). A 2-phase commit
    scheme works the following way; for the discussion, assume we modify DB
    pages 22 and 23:
    1) start transaction
    2) read page 22 from DB and write it to LOG
    3) update page 22' in DB and write 22' to LOG
    4) same as 2), 3) for page 23
    5) commit
    6) free LOG space

Database, empty
    If you just formatted the database and want to start fresh with ADSM,
    you need to access ADSM from its console, via SERVER_CONSOLE mode
    (q.v.). From there you can register administrators, etc., and get
    started.

Database, enlarge
    You can extend the space which may be used within database "volumes"
    (actually, files) by using the 'EXTend DB' command. If your existing
    files are full, you *cannot* extend the files themselves: they are
    fixed in size. Instead, you have to add a volume (file), as follows:
    - Create and format the physical file by doing this from AIX:
      'dsmfmt -db /adsm/dbext1 100'
      which will create a 101 MB file, with 1 MB added for overhead (the
      LVM Fixed Area).
    - Define the volume (file) to ADSM:
      'DEFine DBVolume /adsm/dbext1'
      The space will now show up in 'Query DBVolume' and 'Query DB', but
      will not yet be available for use.
    - Make the space available:
      'EXTend DB 100'
    Note that doing this may automatically trigger a database backup, with
    message ANR4552I, depending.

Database, extend usable space
    'EXTend DB N_Megabytes'
    The extension is a physical operation, so the Unix "filesize" resource
    limit could disrupt the operation. Note that doing this may
    automatically trigger a database backup, with message ANR4552I,
    depending.

Database, maximum size, architectural
    Per APAR IC15376, the ADSM database should not exceed 500 GB. Per the
    TSM 5.1,2,3,4,5 Admin Guide manual: 530 GB.
    Ref: Server Admin Guide, topic "Increasing the Size of the Database or
    Recovery Log", or "Manually Increasing the Database or Recovery Log",
    in Notes thereunder. The SHow LVMFA command will also reveal the
    maximum size (see the reported "Maximum possible DB LP Table size").
    See also: Volume, maximum size

Database, maximum size, yours
    Within the architectural size, the practical size of your database may
    be substantially less. Your size limit is governed by how much data
    your TSM server can handle. Basically, if it grows to the point where a
    day's worth of normal work cannot fit within a day, and you've done all
    the tuning and hardware modernization that is feasible, then you have
    to consider splitting your TSM server into a second server. In
    particular, if Expiration runs so long that it impinges upon client
    backup periods, then you need to consider the split; and, certainly, if
    Expiration habitually runs more than 24 hours, you definitely need to
    address server configuration.

Database, mirror
    See: MIRRORRead LOG

Database, mirror, create
    Define a volume copy via:
    'DEFine DBCopy Db_VolName Copy_VolName'
    Then you can do an 'EXTend DB N_Megabytes' (which will automatically
    kick off a full database backup).
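The 2-phase commit scheme outlined under "Database, designed for integrity" above can be illustrated with a toy undo-log. This is a hedged sketch only - none of the names or structures below are TSM code; they simply mimic the write-old-page-to-log, update, commit, free-log sequence described in steps 1) through 6):

```python
# Illustrative sketch of undo-logging as in the 2-phase commit scheme
# described above: old page images go to the LOG before the DB page is
# updated, so an interrupted transaction can be rolled back and the
# database stays consistent. TinyDB and its methods are invented names.
class TinyDB:
    def __init__(self, pages):
        self.pages = dict(pages)   # page number -> contents (the "DB")
        self.log = []              # (page, old image) undo records (the "LOG")
        self.committed = False

    def update(self, page, value):
        self.log.append((page, self.pages[page]))  # old image to LOG first
        self.pages[page] = value                   # then update DB in place

    def commit(self):
        self.committed = True
        self.log.clear()                           # step 6: free LOG space

    def recover(self):
        if not self.committed:                     # crash before commit:
            for page, old in reversed(self.log):   # undo from the LOG
                self.pages[page] = old
            self.log.clear()
```

A transaction that "crashes" before commit is undone entirely; one that committed keeps its updates - which is the consistency property the entry above describes.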
Database, mirror, delete
    'DELete DBVolume Db_VolName'
    (It will be almost instantaneous.)
    Message: ANR2243I

Database, move within a system
    The easiest way to move the TSM database is by volume stepping stones:
    add a volume, or several volumes, at least as capacious as the contents
    of current volumes, and then perform 'DELete DBVolume' on the old. TSM
    will automatically transfer the contents of the dbvolume being deleted
    to the new space, and thus you effect movement.

Database, number of filespace objects
    See: Objects in database

Database, query
    'Query DB [Format=Detailed]'

Database, rebuild from storage pool tapes?
    No: in a disaster situation, the server database *cannot* be rebuilt
    from the data on the storage pool tapes, because the tape files have
    meaning only per the database contents.
    See also: Tape data, recover without TSM

Database, reduce by duress
    Sometimes you have to minimize the size of your database in order to
    relocate it or the like, but can't 'REDuce DB' sufficiently as it sits.
    If so, try:
    - Prune all but the most recent Activity Log entries.
    - Delete any abandoned or useless filespaces to make room. (Q FI F=D
      will help you find those which have not seen a backup in many a day,
      but watch out for those that are just Archive type.)
    - Delete antique Libvol entries.
    - If still not enough space, an approach you could possibly use would
      be to Export and delete any dormant node data, to Import after you
      have moved the db, to bring that data back.

Database, reduce space utilized
    You can end up with a lot of empty space in your database volumes. If
    you need to reclaim it, you can employ the technique of successively
    adding a volume to the database and then deleting the oldest volume,
    until all the original volumes have been treated. This will consolidate
    the data, and can be done while *SM is up. Note that free space within
    the database is a good thing, for record expansion.
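The fragmentation gauge described under "Database, defragment?" above - comparing Pct Util against Maximum Reduction from 'Query DB' - can be expressed as a small calculation. This is an illustrative sketch, not a TSM utility; the function name and parameters are invented, taking the assigned capacity in MB, the Pct Util figure, and the Maximum Reduction in MB:

```python
# Sketch: free space that cannot be given back via 'REDuce DB' sits in
# the middle of the linear database space, i.e. it is fragmented.
# Compare total free space against the reported Maximum Reduction.
def fragmented_mb(capacity_mb, pct_util, max_reduction_mb):
    free_mb = capacity_mb * (1 - pct_util / 100.0)  # total free space
    return max(0.0, free_mb - max_reduction_mb)     # free but unreducible
```

For example, a 1000 MB database at 40% utilization has 600 MB free; if Maximum Reduction is only 100 MB, some 500 MB of free pages are trapped mid-database - the "fragmented" case the entry describes.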
Database, remove volume
    'DELete DBVolume Db_VolName'
    That starts a process to migrate data from the volume being deleted to
    the remaining volumes. You can monitor the progress of that migration
    by doing 'q dbv f=d'.

Database, reorganize
    See: dsmserv UNLOADDB (TSM 3.7)

Database, space taken per client node
    This is difficult to determine (and no one really cares, anyway), but
    here's an approach: The Occupancy info provides the number of filespace
    objects, by type, in primary and copy storage pools. The Admin Guide
    topic "Estimating and Monitoring Database and Recovery Log Space
    Requirements" provides numbers for space utilized. The product of the
    two would yield an approximate number.

Database, space taken for files
    From the Admin Guide chapter "Managing the Database and Recovery Log",
    topic "Estimating and Monitoring Database and Recovery Log Space
    Requirements":
    - Each version of a file that ADSM stores requires about 400 to 600
      bytes of database space. (This is an approximation which anticipates
      average usage. Consider that for Archive files, the Description
      itself can consume up to 255 chars, or contribute less if not used.)
    - Each cached or copy storage pool copy of a file requires about 100 to
      200 bytes of database space.
    - Overhead could increase the required space up to an additional 25%.
    These are worst-case estimations: the aggregation of small files will
    substantially reduce database requirements. Note that space in the
    database is used from the bottom, up.
    Ref: Admin Guide: Estimating and Monitoring Database and Recovery Log
    Space Requirements

Database, "split"
    There is no utility for splitting the TSM database, per se; and,
    certainly, a given TSM server instance can employ only one database.
    Sites with an unwieldy database size (defined as taking too much of the
    day to back up) may want to create a second TSM server instance and
    have that one take some of the load.
    This is most commonly accomplished simply by having clients start using
    the second server for data storage, pointing back to the old server
    only for the restoral of older data, until that all ultimately expires
    on the older server. A more cumbersome approach is to employ Export to
    move nodes to the new server, but few shops go through that Herculean
    effort.

Database, verify and fix errors
    See: 'DSMSERV AUDITDB'

Database allocation on a disk
    For optimal performance and minimal seek times:
    - Use the center of a disk for TSM space. This means that the disk arm
      is never more than half a disk away from the spot it needs to reach
      to service TSM.
    - You could then allocate one biggish space straddling the center of
      the disk; but if you instead make it two spaces which touch at the
      center of the disk, you gain benefit from TSM's practice of creating
      one thread per TSM volume, so this way you can have two and thus some
      parallelism.

Database Backup
    To capture a backup copy of the ADSM database on serial media, via the
    'BAckup DB' command. Database backups are not portable across platforms
    - they were not designed to be so - and include a lot of information
    that is platform specific: use Export/Import to migrate across
    platforms. By using the ADSMv3 Virtual Volumes capability, the output
    may be stored on another ADSM server (electronic vaulting).
    See also: dsmserv RESTORE DB

Database backup, latest
    Command 'Query DB Format=Detailed' and see line
    "Last Complete Backup Date/Time:"
    Or via Select:
    SELECT DATE_TIME AS -
    "DATE TIME ",TYPE, -
    MAX(BACKUP_SERIES),VOLUME_NAME FROM -
    VOLHISTORY WHERE TYPE='BACKUPFULL' OR -
    TYPE='BACKUPINCR'

Database backup, query volumes
    'Query VOLHistory Type=DBBackup'
    The timestamp displayed is when the database backup started, rather
    than finished.
    Another method: 'Query DRMedia DBBackup=Yes COPYstgpool=NONE'
    Note that using Query DRMedia affords you the ability to very
    selectively retrieve info, and send it to a file, even from a server
    script.
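When scripting around database backup volumes, the volume history backup file can also be scanned outside the server by seeking the BACKUPFULL / BACKUPINCR type tokens. The sketch below is a hedged illustration: the one-record-per-line layout and the 'Volume Name:' field wording are assumptions for demonstration only - real volume history files may split fields across several lines, so check your own file's format before relying on this:

```python
# Sketch: harvest DB-backup volsers from volume history text by looking
# for the BACKUPFULL / BACKUPINCR type tokens. The line layout parsed
# here is hypothetical; adapt the pattern to your actual file.
import re

def db_backup_volumes(lines):
    vols = []
    for line in lines:
        if "BACKUPFULL" in line or "BACKUPINCR" in line:
            m = re.search(r'Volume Name:\s*"?([\w./-]+)"?', line)
            if m:
                vols.append(m.group(1))
    return vols
```

An operating-system job could run such a scan before and after 'BAckup DB' and compare the two lists - the approach suggested under "Database backup volumes, identifying" below.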
Database backup, delete all
    See: DELete VOLHistory

Database backup, Full needed when?
    - The most recent Full backup has gone away.
    - 32 Incrementals have been done.
    - Switching the Recovery Log between roll-forward and normal mode.
    - When space triggers are defined, and a new database volume was added.

Database backup, TSM v.6+
    In this era of TSM, the database technology is DB2. The backup of that
    database consists of a full dump of the database and an archive of the
    recovery log. On tape, these are written on one tape; on FILE, there is
    no reason to cram it into one volume, so two volumes are created.

Database backup in progress?
    Do 'Query DB Format=Detailed' and look at "Backup in Progress?".

Database backup trigger, define
    See: DEFine DBBackuptrigger

Database backup trigger, query
    'Query DBBackuptrigger [Format=Detailed]'

Database backup triggering causes
    You may find TSM spontaneously backing up its database. Some possible
    reasons:
    - The DBBackuptrigger setting.
    - An UpgradeDB of any kind causes TSM to set a hard trigger to
      emphatically want to create a full backup of its database.

Database backup volume
    Do 'Query VOLHistory Type=DBBackup', if the ADSM server is up, or
    'Query OPTions' and look for "VolumeHistory". If *SM is down, you can
    find that information in the file specified on the "VOLUMEHistory"
    definition in the server options file (dsmserv.opt).
    See "DSMSERV DISPlay DBBackupvolumes" for displaying information about
    specific volumes when the volume history file is unavailable.
    See "DSMSERV RESTORE DB Preview=Yes" for displaying a list of the
    volumes needed to restore the database to its most current state.

Database backup volume, pruning
    If you do not have DRM: Use 'DELete VOLHistory TODate=SomeDate
    TOTime=SomeTime Type=DBBackup' to manage the number of database backups
    to keep.
    If you have DRM: 'Set DRMDBBackupexpiredays __'

Database backup volumes, identifying
    Seek "BACKUPFULL" or "BACKUPINCR" in the current volume history backup
    file - a handy way to find them, without having to go into ADSM. Or
    perform server query:
    select volume_name from volhistory -
    where (upper(type)='BACKUPFULL' or -
    upper(type)='BACKUPINCR')

Database backup volumes, identifying historical
    Unfortunately, when a 'DELete VOLHistory' is performed, the volsers of
    the deleted volumes are not noted. But you can get them two other ways:
    1. Have an operating system job capture the volsers of the BACKUPFULL,
       BACKUPINCR volumes contained in the volume history backup file
       (named in the server VOLUMEHistory option) before and after the db
       backup, then compare.
    2. Do 'Query ACtlog BEGINDate=-N MSGno=1361' to pick up the historical
       volsers of the db backup volumes at backup completion, to check
       against those no longer in the volume history.

Database backup volumes, return to scratch
    IBM Technotes: 1250669; 1115957

Database backups (Oracle, etc.)
    Done with TSM via the Tivoli Data Protection (TDP) products.
    See: TDP
    See also: Adsmpipe

Database buffer pool size, define
    "BUFPoolsize" definition in the server options file.

Database buffer pool statistics, reset
    'RESet BUFPool'

Database change statistics since last backup
    'Query DB Format=Detailed'

Database consumption factors
    - All the administrative definitions are here; eliminate what is no
      longer needed.
    - The Activity Log is contained in the database: control the amount
      retained via 'Set ACTlogretention N_Days'. The Activity Log also logs
      administrator commands, Events, client session summary statistics,
      etc., which you may want to limit.
    - The Summary table can consume a lot of space (and be so large as to
      delay queries for current data). See Set SUMmaryretention for more
      info.
    - Volume history entries consume some space: eliminate what's obsolete
      via 'DELete VOLHistory'.
    - The database is at the mercy of client nodes or their filespaces
      being abandoned, and client file systems and disks being renamed such
      that obsolete filespaces consume space.
    - More than anything, the number of files cataloged in the database
      consumes the most space, and your Copy Group retention policies
      govern the amount kept. Nodes which have a sudden growth in file
      system files will inflate the db via Backup. Perform Query OCCupancy
      and look for gluttons.
      See: Many Small Files challenge
    - Verify that retention policy values that you think are in effect
      actually are. You may have overlooked doing an ACTivate POlicyset.
    - Restartable Restores consume space in that the server is maintaining
      state information in the database (the SQL RESTORE table). Generally
      control via server option RESTOREINTERVAL, and reclaim space from
      specific restartable restores via the server command CANCEL RESTORE.
      Also, during such a restore the server will need extra database space
      to sort filenames in its goal to minimize tape mounts during the
      restoral, and so there will be that surge in usage.
    - Complex SELECT operations will require extra database space to work
      the operation.
    - When you Archive a file, the directory containing it is also
      archived. When the -DEScription="..." option is used, to render the
      archived file unique, it also causes the archived directory to be
      rendered unique, and so you end up with an unexpectedly large number
      of directories in the *SM database, even though they are all
      effectively duplicates in terms of path. When users eventually clean
      up what they archived, they almost always delete just the files, not
      realizing that the directories remain behind. This results in an
      amazing build-up of crud in your database, which is a pain to discern
      in server queries, as it won't be reflected in Query OCCupancy or
      Query AUDITOccupancy reports in the case of Unix filespaces.
- The size of the Aggregate in Small Files Aggregation is also a factor: the more small files in an aggregate, the lower the overhead in database cataloging. As the 3.1 Technical Guide puts it, "The database entries for a logical file within an aggregate are less than entries for a single physical file." See: Aggregate - Make sure that clients are not running Selective backups or Archives on their file systems (i.e., full backups) routinely instead of Incremental backups, as that will rapidly inflate the database. Likewise, be very careful of coding MODE=ABSolute in your Copy Group definitions. - Talk to client administrators about excluding useless files from backup, like temp directories and web browser cache files. - Make sure that 'EXPIre Inventory' is being run regularly - and that it gets to run to completion. Note that API-based clients, such as the TDP series and HSM, require their own, separate expiration handling: failing to do that will result in data endlessly piling up in the storage pools and database. - Not using the DIRMc option can result in directories being needlessly retained after their files have expired, in that the default is for directories to bind to the management class with the longest retention period (RETOnly). - Realize that long-lived data that was stored in the server without aggregation will be output from reclamation likewise unaggregated, thus using more database space than if it were aggregated. (See: Reclamation) - With the Lotus Notes Agent, *SM is cataloging every document in the Notes database (.NSF file). - Beware the debris left around from the use of DEFine CLIENTAction (q.v.). In particular, do Query SCHedule and look for a build-up of transient schedules. - Windows System Objects are large and consist of thousands of files. - Wholesale changes of ACLs (Access Control Lists) in a file system may cause all the files to be backed up afresh. 
    - Daylight Savings Time transitions can cause defective TSM software to
      back up every file.
    - Use of DISK devclass volumes will consume a lot more db space than
      sequential volumes, as TSM has to track every disk block. (See Admin
      Guide table "Comparing Random Access and Sequential Access Disk
      Devices".)
    - And, loss of space can be relative: If for some reason one of your DB
      volumes dropped out of the collective, that's a space issue unto
      itself. Do Query DBVolume to check.
    In that the common cause of db growth is file deluge from a client
    node, simple ways to inspect are: produce a summary of recent *SM
    accounting records; harvest session-end ANE* records from the Activity
    Log; and do a Query Content with a negative count value on recently
    written storage pool tapes. (Ideally, you should be running accounting
    record summaries on a regular basis as a part of system management.)

Database entry size
    The size of an entry in the TSM database is 400 - 600 bytes, says 2007
    IBM Technote 1239154.

Database file
    It is named within server directory file dsmserv.dsk. (See
    "dsmserv.dsk".)

Database file backups
    Questions perennially arise as some folks tr