Patch Name: PHSS_28950
Patch Description: s700_800 11.X OV ITO7.1X Msg/Act Tru64 Agent A.07.20
Creation Date: 03/05/28
Post Date: 03/05/29
Repost: 03/06/18
The Equivalent Patches section of the patch documentation was modified to clarify that ITOSOL_00221 is the equivalent patch for Solaris.
Hardware Platforms - OS Releases:
  s700: 11.00 11.11
  s800: 11.00 11.11
Products: OpenView Operations 7.1
Filesets:
  OVOPC-CLT.OVOPC-OSF-CLT,fr=A.07.10,fa=HP-UX_B.11.00_32/64,v=HP
  OVOPC-CLT.OVOPC-OSF-CLT,fr=A.07.10,fa=HP-UX_B.11.11_32/64,v=HP
Automatic Reboot?: No
Status: General Release
Critical: No
Category Tags: defect_repair general_release
Path Name: /hp-ux_patches/s700_800/11.X/PHSS_28950
Symptoms:
PHSS_28950:
- SR: H555008529
  If a process dies immediately after being started by the Control Agent, OpC30-1094 messages may start appearing in the error logfile.
- SR: H555008582
  The LANG value in the start-up script is not set correctly for the Tru64, AIX, Solaris, and Linux agents.
- SR: H555009052
  Support for Tru64 clusters has been added. To add this support, the agent location was changed. Because of this, all agent software must be deinstalled from all managed nodes before the new software is installed. More information about Tru64 managed nodes is available in /opt/OV/ReleaseNotes/OVOPC-OSF-CLT.pdf. The file is located on the management server after successful installation of this patch.
- SR: H555009075
  On Tru64, opcswitchuser.sh fails to switch to user root with the following message:
    user 'root' does exist
    group 'system' does not exist
- SR: H555009405
  When running OVO agent binaries, the message "Unaligned access..." could appear.
- SR: H555009574
  The opcfwtmp monitor tool does not work as expected on Tru64 5.1A systems.
- SR: H555010122
  Agents cannot be installed on 5.1A nodes where uname -a returns P5.1.
- SR: 8606232431
  VPO tries to resolve node names that contain only blanks because of a typo in a template definition or variable assignment.
This leads to a lot of unnecessary DNS traffic.
- SR: B555008674
  The opcagt and opcragt commands have a new option, '-version'. It was not documented in either man page and was not part of the usage strings of opcagt and opcragt.
- SR: B555014851
  opcmsga sends the same message operation (e.g. an acknowledge request created by opcmack(1)) again and again if the related message is not in the cache and one of the target managers cannot be reached.
- SR: B555014942
  The opcle process loops if a logfile is removed while it is being read.
- SR: B555015325
  opcmsga wrongly generates the message OpC30-3002 'opcctla not running on node localhost.' when receiving a reconfigure signal, or at exit time.
- SR: B555015331
  The monitor agent, opcmona, may report wrong results for executed monitor scripts or programs when using many 'advanced monitors' such as OVPERF. In some of these cases opcmona might even abort.
- SR: B555015349
  The monitor agent might abort while using templates/policies that have an external source specified.
- SR: B555015449
  The monitor agent may abort while receiving templates/policies.
- SR: B555015479
  When executing opcagt -kill, an entry for the Control Agent (opcctla) sometimes remains in the pids file. This should not happen because opcctla is no longer running.
- SR: B555015496
  Queue file handling was inefficient for larger queue files.
- SR: B555015557
  The current implementation of the control agent does not allow aborted sub agent processes to be restarted automatically.
- SR: B555015626
  opcif_read() does not return data if the signal pipe is empty but there is still more data in the queue file. This can happen, for example, if the maximum pipe size of 8192 bytes was reached and therefore no more signal bytes could be written into the signal pipe.
- SR: B555015758
  opcmsgi aborts if one of the set attributes has an unmatched '<'.
- SR: R555018044
  The monitor scheduled command policy will be executed only once.
- SR: R555019153
  When deploying policies from a Windows management server and using the "StoreCollection" method within Perl or VBScript, the monitor agent might show the policy name instead of the given metric name, or may abort.
- SR: B552002462
  After deploying the Exchange SPI and running for 4 days, the OpenView Reporter reports are empty since the data is not available from Coda. Additionally, the system may be slow and low on virtual memory.
- SR: 8606293812
  Some GBL_NET_* metrics are not logged on the 5.X OS versions.
- SR: B555015875
  Metrics cannot be added to a DDF object after data has been submitted.
- SR: R555017222
  64-bit counters that are populated from 32-bit fields produce negative numbers when the 32-bit field wraps. This problem was seen for BYNETIF_IN_BYTE on HP-UX, but is applicable to all platforms.
- SR: R555019316
  Support and customers were not able to easily determine whether or not SPI data was being logged by Coda, and needed a tool to provide this functionality.
- SR: R555019330
  GBL_MEM_PHYS was reported in KBytes.
- SR: R555020973
  Making a partial copy of an instanceList is very slow.
- SR: R555020974
  Coda consumes too much memory when processing data requests.
- SR: R555021933
  Coda fails with a floating point exception on some Tru64 systems.
- SR: R555021935
  Coda fails with a floating point exception on some Tru64 systems.
- SR: B555012657
  llbserver has been enhanced. A list of applications registered with llbserver can be printed by running 'llbserver -query'. With 'llbserver -ping' one can check whether llbserver is running.
- SR: B555013635
  Many instances of llbserver can run in parallel on the same port.
PHSS_27543:
- SR: H555006719
  If the agent is running as a non-root user and the management server processes are restarted, the agent does not resume sending messages but continues to buffer them.
- SR: H555008275
  The Message Agent can hang for no apparent reason and stop sending all messages to the Management Server, regardless of its state.
- SR: H555008602
  If OPC_RPC_ONLY is set to TRUE in opcinfo, the message agent core dumps after a while.
- SR: H555008631
  Customers receive many OpC20-61 and OpC20-63 messages in the error logfile when using NCS agents.
- SR: H555008866
  The agent is not running in a TruCluster environment.
- SR: 8606187183
  After deploying/undeploying opcmsg policies/templates, the suppression times are lost. Messages that should be suppressed after a deploy/undeploy of policies/templates are shown.
- SR: 8606221769
  Compaq Tru64 UNIX 5.1A is not supported with the VPO agent.
- SR: 8606242614
  Messages are incorrectly suppressed by the logfile encapsulator if "suppress identical output messages" is specified and the messages differ only in the values of <$LOGFILE> and/or <$LOGPATH>.
- SR: 8606262299
  The logfile encapsulator reports that the file to be executed for preprocessing of a logfile template failed. This error occurs randomly and only from time to time. You will get an error message similar to the following:
    Command 'opcfwtmp /tmp/wtmp.stat /var/adm/wtmp /tmp/wtmp.out' configured in source 'Logins (10.x/11.x HP-UX)' returns 1. Ignoring this logfile. (OpC30-107)
- SR: 8606282247
  Variable replacement is now performed for all Message Defaults fields.
- SR: B555009284
  The authorization verification for remote start and stop requests of the agent was sometimes unreliable. This possibly allowed more OVO servers to start or stop the agent than specified in the MoM configuration.
- SR: B555013548
  The manual agent installation script opc_inst expects compressed packages, so if you run it a second time nothing happens because the packages are already uncompressed.
- SR: B555013719
  The message agent does not stop message buffering when the management server is available again after a network outage, a fixed DNS problem, or similar.
This can happen when the agent restarts or the machine reboots while the network problem persists.
- SR: B555013891
  In MoM environments, opcmsga does not return action responses to SECONDARY managers if their names are not resolvable.
- SR: B555014093
  opcmona may crash (UNIX) or not process all SCHEDULE templates (Windows) when using SCHEDULE templates.
- SR: B555014132
  During a distribution the agent may report an error like:
    ITO responsible manager configuration. (OpC30-1203)
    Cannot open file \usr\OV\tmp\OpC\cfgchg. System Error Number: 13 (d) - The data is invalid. (OpC20-63)
- SR: B555014574
  opcagt -start/-stop/-status does not work correctly if the currently running agent cannot be reached over RPC.
- SR: B555014591
  When OPC_INT_MSG_FLT is set to TRUE, the filtered message is received corrupted on the server in a Japanese environment.
- SR: B555014667
  The first lines of a logfile are not forwarded to the message browser when using a command to discover logfiles and the logfile was created after the first polling interval.
- SR: B555014715
  The Control Agent slowly grows in memory usage.
- SR: B555014759
  When enabling/disabling policies you might discover a memory leak in the agent processes.
- SR: B553003927
  opcmsgi leaks memory when handling messages with custom message attributes.
- SR: R555017956
  The monitor agent is terminated if you use a session variable within the message text of a template followed by any other variable (for example $INSTANCE).
- SR: R555018043
  Japanese characters in Perl scripts within templates are not converted correctly into the server code set. If the Japanese characters are used as message text, for example, these characters are corrupted.
Defect Description:
PHSS_28950:
- SR: H555008529
  This is a timing issue, where internal structures are not updated by the signal handler in time for proper values to be written to the PIDS file. An additional check for process presence has been implemented before writing the PIDS file.
- SR: H555008582
  The agent used the system value for the LANG setting in the start-up script, which caused problems when this setting was not the same as the LANG setting in the database. The installation now always checks the node settings in the database.
- SR: H555009075
  The command 'id [user]' on DEC returns output in the format:
    uid=n([user]) gid=m([ef_group]) groups=[other Groups]
  The problem is that the effective group for user 'root' on DEC is 'daemon', and group 'system' is only listed among the rest of the groups.
  Fix: The command 'id -G -n [user]' displays all group IDs (real, effective, and supplementary) for the specified user. With that, all possible groups for a user are considered.
- SR: H555009574
  The format of the /var/adm/wtmp logfile has changed on Tru64 UNIX 5.1A. Because of this, the opcfwtmp monitor tool does not process this logfile correctly on 5.1A systems. There are now two binaries, one for each format of /var/adm/wtmp.
- SR: H555010122
  Tru64 P5.1 is a patched version of the 5.1A Tru64 OS. Agents could not be installed on such nodes because the installation scripts did not recognize this OS as supported, since uname -a returns P5.1 instead of V5.1. The installation scripts have been updated so that this version is also recognized as a supported Tru64 OS.
- SR: 8606232431
  VPO now ignores node names that contain only white space characters, without contacting the name service.
- SR: B555008674
  The man pages for opcagt and opcragt now document the new option '-version'. The message catalog was updated to show the '-version' option in the usage strings of the opcagt and opcragt commands.
- SR: B555014851
  opcmsga maintains an internal cache to find the target managers per message ID. The cache expires after 1 hour (this can be changed with the opcsvinfo variable OPC_STORE_TIME_FOR_MGR_INFO); it then ran into a problem in its algorithm, so that a message operation on a non-cached message was sent again and again until the last target manager in an internal list could be reached.
- SR: B555015325
  During startup, a timing issue prevented a connection from opcmsga to opcctla, which generated the message. This message was not immediately reported, but only after receiving a signal. Retries now prevent the connection failure. If it still fails because opcctla is really not running, the error is reported immediately.
- SR: B555015331
  opcmona holds a central table for all subprocess-related information. Advanced monitors are executed in separate threads and could access this table in parallel, thus overwriting each other's data. The table accesses are now serialized by a mutex.
- SR: B555015349
  The monitor agent aborts when using templates/policies with external sources as soon as it receives a value from the external source while checking the threshold against a previously received value. In this case the received value is stored temporarily, and as soon as the monitor agent tries to process this value it aborts. The root cause is that the monitor agent tries to free already freed memory.
- SR: B555015496
  So far, a queue file garbage collection was done if more than 256 KB were unused. Now the unused space must be more than 256 KB and more than a quarter of the queue file size. This drastically reduces file I/O when handling large queue files.
- SR: B555015557
  The control agent has been changed to make the restart of aborted sub agent processes configurable. The control agent can be configured to restart aborted sub agent processes; furthermore, it can be defined how often a process should be restarted in a certain time interval. To configure this, you can use the following variables in the opcinfo file:
  OPC_RESTART_SUBAGENT
    If set to TRUE, the control agent tries to restart aborted sub agent processes. The restart is done a defined number of times (OPC_RESTART_COUNT) in a specified period of time (OPC_RESTART_MINIMUM_RUN_TIME). In case the process aborts more often, it won't be restarted again.
    Type/Unit: TRUE|FALSE
    Default:   TRUE
  OPC_RESTART_COUNT
    Defines how often an aborted sub agent process should be restarted within the specified minimum runtime. In case a process stops more often, it won't be restarted. (See OPC_RESTART_SUBAGENT.)
    Type/Unit: integer
    Default:   5
  OPC_RESTART_DELAY
    Defines the time the control agent waits before it restarts an aborted sub agent process. The time is specified in seconds.
    Type/Unit: integer
    Default:   10
  OPC_RESTART_MINIMUM_RUN_TIME
    Defines the time frame a sub agent process should run without being restarted more often than specified by OPC_RESTART_COUNT. The time is specified in minutes.
    Type/Unit: integer (minutes)
    Default:   60
- SR: R555018044
  An internal flag is restored after monitor schedule policy execution.
- SR: B552002462
  The errors in the Exchange SPI Mailbox Reports were due to improper handling of the DDF data in Coda. This has been corrected so that only valid data is returned to Reporter.
- SR: 8606293812
  The network bytes_in and bytes_out metrics show zero values when collected from OSF1 v5.0 (not 5.0A) and v5.1. On OSF1 v4.0, good data is obtained.
- SR: B555015875
  New metrics can now be added to an existing DDF object after data has been submitted.
- SR: R555017222
  Coda now handles a 32-bit wrap in a 64-bit field, thereby showing the correct values for any affected metric.
- SR: R555019316
  The codautil tool has been enhanced to allow any user to view the object model (through the use of the -obj option) for all of the SPIs that have been deployed and are running on their nodes. In addition, the -dumpds option (the datasource name is required) reports all of the data for the specified datasource for the last logged interval.
- SR: R555019330
  GBL_MEM_PHYS is now reported in MBytes.
- SR: R555020973
  Attributes were added to keep track of the last instance visited in the instance list. These attributes are used to speed up sequential access to list elements.
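Putting the restart variables described above together, an opcinfo entry enabling subagent restart with the documented defaults might look like the following. The 'VARIABLE value' line syntax is assumed here from the opcinfo variables mentioned elsewhere in this document; the values shown are the documented defaults.

```
# Restart aborted sub agent processes (assumed opcinfo syntax; defaults shown)
OPC_RESTART_SUBAGENT         TRUE
OPC_RESTART_COUNT            5
OPC_RESTART_DELAY            10
OPC_RESTART_MINIMUM_RUN_TIME 60
```

With these values, the control agent would restart an aborted subagent after a 10-second delay, at most 5 times within a 60-minute window.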
- SR: R555020974
  Extensive modifications were made to minimize the memory used to store the dataMatrix on server and client.
- SR: R555021933
  Coda now runs on OSF1 Tru64 systems with 8 GB or more of memory.
- SR: R555021935
  Coda now runs on OSF1 Tru64 systems with 8 GB or more of memory.
- SR: B555012657
  llbserver has been enhanced. A list of applications registered with llbserver can be printed by running 'llbserver -query'. With 'llbserver -ping' one can check whether llbserver is running.
- SR: B555013635
  Many instances of llbserver could run in parallel on the same port. Now llbserver checks whether another instance is already running on the configured port.
For SRs not listed in this section, please see the list of symptoms.
PHSS_27543:
- SR: H555006719
  When communication to a message receiver fails, the message agent starts buffering messages. It periodically checks whether a server is alive by sending it ICMP packets. If the server cannot be reached with ICMP packets, no RPC communication is attempted. Sending ICMP packets is not possible when the agent is running as a non-root user, so the sending function cannot actually send anything. Therefore no replies are ever received, and the message agent buffers messages forever. To fix this, the internal state of the message agent is now updated after an ICMP send attempt if the agent is running as a non-root user.
- SR: H555008275
  The signal handler for SIGIO was installed before the socket on which ICMP replies are received was set to non-blocking mode. An unsolicited SIGIO would trigger the signal handler, which would wait indefinitely on the socket for data that would never arrive. Since the NCS agent is single-threaded, all communication would stop. The fix is to set non-blocking mode before installing the signal handler, so it cannot wait forever.
- SR: H555008602
  When using OPC_RPC_ONLY, ICMP handling is not initialized, but the message agent calls opc_pb_ping_reset() after a successful server checkalive cycle. This causes an invalid (NULL) pointer to be dereferenced, which causes a core dump. opc_pb_ping_reset() now checks whether ICMP handling has been initialized and, if not, returns immediately.
- SR: H555008631
  The NCS agent's open() and stat() calls did not handle EINTR, so a check/loop was implemented to handle it.
- SR: H555008866
  Support for Tru64 clusters has been added. To add this support, the agent location was changed. Because of this, all agent software must be deinstalled from all managed nodes before the new software is installed. More information about Tru64 managed nodes is available in /opt/OV/ReleaseNotes/OVOPC-OSF-CLT.pdf. The file is located on the management server after successful installation of this patch.
- SR: 8606187183
  The opcmsg interceptor restarts after a deploy/undeploy of policies/templates. During this process all the policy/template information is cleaned up and read again from a temporary file. Because suppression times are not stored in this temporary file, these times were lost. Now the suppression times are carried over to the new data.
- SR: 8606221769
  Support for Compaq Tru64 UNIX 5.1A was added to the VPO agent. Both NCS and DCE communication types are supported. DCE 4.2 is required in order to use the DCE communication type.
- SR: 8606242614
  The variables <$LOGFILE> and <$LOGPATH> were replaced after the suppression rules were evaluated. Therefore the comparison did not use the actual logfile name or path, but compared the literal strings "<$LOGFILE>" or "<$LOGPATH>".
- SR: 8606282247
  Variable replacement is now performed for all Message Defaults fields.
- SR: B555009284
  There were two different authorization algorithms which interfered with each other. They have been consolidated, and authorization is now checked more strictly.
- SR: B555013719
  The message agent remained in buffering mode even when the management server was available again. The reason is that the agent was unable to resolve the management server name to an IP address at startup, and it did not try again during runtime. This has been fixed by checking for a resolvable name every time a message is to be buffered, until the name can be resolved; after that, the normal checkalive mechanism, which handles buffered messages, takes place.
- SR: B555013891
  Even if the IP address of the management server was specified in the mgrconf file, it was not used except for the primary manager. This behavior was changed to give the mgrconf file precedence over name resolution.
- SR: B555014093
  opcmona may crash (UNIX) or not process all SCHEDULE templates (Windows) when using SCHEDULE templates. This can occur when there are only spaces in one of the schedule fields (Minute, Hour, Day of the Month, Month, Year, Day of the Week). You can verify this by going to the conf/OpC directory on the node and running opcdcode on the monitor file. When there are entries like WEEKDAY " ", the problem can occur. The monitor agent now treats sequences of spaces like an empty string, that is, as a wildcard, and uses all valid values in the possible range. For WEEKDAY this is 0-6.
- SR: B555014132
  During a distribution the agent may report error OpC30-1203/OpC20-63 when trying to access the cfgchg file. The cause of this problem is that several processes try to get exclusive access to this file at the same time. The problem has been fixed by retrying up to 10 times, with a delay of one second, in case the error occurs.
- SR: B555014574
  With these changes, opcagt is now able to deal with a running opcctla that is not reachable via RPC: opcagt -status displays a warning if the currently running opcctla is not reachable over RPC, but then displays the status according to the pids file.
opcagt -stop will also kill the unresponsive opcctla and try to start a new one. If opcctla is not reachable over RPC, opcagt -start will kill all running agent processes and then start a new opcctla, which starts the agent processes. But of course, the agent won't be able to start if RPC is still not available at that time.
- SR: B555014591
  The defect was caused by a double conversion from the server code set to the internal code set: once on the agent's side when the internal message was sent to opcmsga, and once by opcmsgi when it forwarded the message again. Now the message is converted back from the internal code set to the server code set in opcmsga before it is sent to the opcmsgi queue. The management server receives the message through opcmsga in the internal code set and converts it into the server code set. The conversion is made only if the internal code set differs from the server code set.
For SRs not listed in this section, please see the list of symptoms.
Enhancement: No
SR: R555021935 R555021933 R555020974 R555020973 R555019330 R555019316 R555019153 R555018044 R555018043 R555017956 R555017222 H555010122 H555009574 H555009405 H555009075 H555009052 H555008866 H555008631 H555008602 H555008582 H555008529 H555008275 H555006719 B555016326 B555015875 B555015758 B555015626 B555015557 B555015496 B555015479 B555015449 B555015349 B555015331 B555015325 B555014942 B555014851 B555014759 B555014715 B555014667 B555014591 B555014574 B555014132 B555014093 B555013891 B555013719 B555013635 B555013548 B555012657 B555009284 B555008674 B553003927 B552002462 8606293812 8606282247 8606262299 8606242614 8606232431 8606221769 8606187183
Patch Files:
OVOPC-CLT.OVOPC-OSF-CLT,fr=A.07.10,fa=HP-UX_B.11.00_32/64,v=HP:
OVOPC-CLT.OVOPC-OSF-CLT,fr=A.07.10,fa=HP-UX_B.11.11_32/64,v=HP:
/var/opt/OV/share/databases/OpC/mgd_node/vendor/dec/alpha/unix/A.07.10/RPC_DCE_TCP/opc_pkg.Z
/var/opt/OV/share/databases/OpC/mgd_node/vendor/dec/alpha/
unix/A.07.10/RPC_DCE_TCP/comm_pkg.Z /var/opt/OV/share/databases/OpC/mgd_node/vendor/dec/alpha/ unix/A.07.10/RPC_DCE_TCP/perf_pkg.Z /var/opt/OV/share/databases/OpC/mgd_node/vendor/dec/alpha/ unix/A.07.10/RPC_NCS/opc_pkg.Z /var/opt/OV/share/databases/OpC/mgd_node/vendor/dec/alpha/ unix/A.07.10/RPC_NCS/comm_pkg.Z /var/opt/OV/share/databases/OpC/mgd_node/vendor/dec/alpha/ unix/A.07.10/RPC_NCS/perf_pkg.Z /var/opt/OV/share/databases/OpC/mgd_node/vendor/dec/alpha/ unix51A+/A.07.10/RPC_DCE_TCP/opc_pkg.Z /var/opt/OV/share/databases/OpC/mgd_node/vendor/dec/alpha/ unix51A+/A.07.10/RPC_DCE_TCP/comm_pkg.Z /var/opt/OV/share/databases/OpC/mgd_node/vendor/dec/alpha/ unix51A+/A.07.10/RPC_DCE_TCP/perf_pkg.Z /var/opt/OV/share/databases/OpC/mgd_node/vendor/dec/alpha/ unix/A.07.10/RPC_DCE_TCP/opc_version /var/opt/OV/share/databases/OpC/mgd_node/vendor/dec/alpha/ unix/A.07.10/RPC_DCE_TCP/comm_version /var/opt/OV/share/databases/OpC/mgd_node/vendor/dec/alpha/ unix/A.07.10/RPC_DCE_TCP/perf_version /var/opt/OV/share/databases/OpC/mgd_node/vendor/dec/alpha/ unix/A.07.10/RPC_NCS/opc_version /var/opt/OV/share/databases/OpC/mgd_node/vendor/dec/alpha/ unix/A.07.10/RPC_NCS/comm_version /var/opt/OV/share/databases/OpC/mgd_node/vendor/dec/alpha/ unix/A.07.10/RPC_NCS/perf_version /var/opt/OV/share/databases/OpC/mgd_node/vendor/dec/alpha/ unix51A+/A.07.10/RPC_DCE_TCP/opc_version /var/opt/OV/share/databases/OpC/mgd_node/vendor/dec/alpha/ unix51A+/A.07.10/RPC_DCE_TCP/comm_version /var/opt/OV/share/databases/OpC/mgd_node/vendor/dec/alpha/ unix51A+/A.07.10/RPC_DCE_TCP/perf_version /var/opt/OV/share/databases/OpC/mgd_node/vendor/dec/alpha/ unix/A.07.10/RPC_DCE_TCP/opc_inst /var/opt/OV/share/databases/OpC/mgd_node/vendor/dec/alpha/ unix/A.07.10/RPC_NCS/opc_inst /var/opt/OV/share/databases/OpC/mgd_node/vendor/dec/alpha/ unix51A+/A.07.10/RPC_DCE_TCP/opc_inst /var/opt/OV/share/databases/OpC/mgd_node/vendor/dec/alpha/ unix/A.07.10/RPC_DCE_TCP/install/opcrinst 
/var/opt/OV/share/databases/OpC/mgd_node/vendor/dec/alpha/ unix/A.07.10/RPC_NCS/install/opcrinst /var/opt/OV/share/databases/OpC/mgd_node/vendor/dec/alpha/ unix51A+/A.07.10/RPC_DCE_TCP/install/opcrinst /opt/OV/bin/OpC/install/cluster_deploy.sh /opt/OV/ReleaseNotes/OVOPC-OSF-CLT.pdf /var/opt/OV/share/databases/OpC/mgd_node/vendor/dec/alpha/ unix/AgentPlatform /var/opt/OV/share/databases/OpC/mgd_node/vendor/dec/alpha/ unix51A+/AgentPlatform /var/opt/OV/share/databases/OpC/mgd_node/vendor/dec/alpha/ unix51A+/A.07.10/RPC_DCE_TCP/install/opcrclchk /var/opt/OV/share/databases/OpC/mgd_node/vendor/dec/alpha/ unix51A+/A.07.10/RPC_DCE_TCP/install/opcrdschk /var/opt/OV/share/databases/OpC/mgd_node/vendor/dec/alpha/ unix51A+/A.07.10/RPC_DCE_TCP/install/opcrndchk /var/opt/OV/share/databases/OpC/mgd_node/vendor/dec/alpha/ unix51A+/A.07.10/RPC_DCE_TCP/install/opcroschk /var/opt/OV/share/databases/OpC/mgd_node/vendor/dec/alpha/ unix51A+/A.07.10/RPC_DCE_TCP/install/opcrverchk /var/opt/OV/share/databases/OpC/mgd_node/vendor/dec/alpha/ unix51A+/A.07.10/RPC_DCE_TCP/install/opcnsl /var/opt/OV/share/databases/OpC/mgd_node/vendor/dec/alpha/ unix51A+/A.07.10/RPC_DCE_TCP/monitor/ana_disk.sh.Z /var/opt/OV/share/databases/OpC/mgd_node/vendor/dec/alpha/ unix51A+/A.07.10/RPC_DCE_TCP/monitor/cpu_mon.sh.Z /var/opt/OV/share/databases/OpC/mgd_node/vendor/dec/alpha/ unix51A+/A.07.10/RPC_DCE_TCP/monitor/disk_mon.sh.Z /var/opt/OV/share/databases/OpC/mgd_node/vendor/dec/alpha/ unix51A+/A.07.10/RPC_DCE_TCP/monitor/proc_mon.sh.Z /var/opt/OV/share/databases/OpC/mgd_node/vendor/dec/alpha/ unix51A+/A.07.10/RPC_DCE_TCP/monitor/sh_procs.sh.Z /var/opt/OV/share/databases/OpC/mgd_node/vendor/dec/alpha/ unix51A+/A.07.10/RPC_DCE_TCP/monitor/swap_mon.sh.Z /var/opt/OV/share/databases/OpC/mgd_node/vendor/dec/alpha/ unix51A+/A.07.10/RPC_DCE_TCP/monitor/vp_chk.sh.Z /var/opt/OV/share/databases/OpC/mgd_node/vendor/dec/alpha/ unix51A+/A.07.10/RPC_DCE_TCP/monitor/mailq_l.sh.Z 
/var/opt/OV/share/databases/OpC/mgd_node/vendor/dec/alpha/ unix51A+/A.07.10/RPC_DCE_TCP/actions/st_inetd.sh.Z /var/opt/OV/share/databases/OpC/mgd_node/vendor/dec/alpha/ unix51A+/A.07.10/RPC_DCE_TCP/actions/st_syslogd.sh.Z /var/opt/OV/share/databases/OpC/mgd_node/vendor/dec/alpha/ unix51A+/A.07.10/RPC_DCE_TCP/actions/st_mail.sh.Z /var/opt/OV/share/databases/OpC/mgd_node/vendor/dec/alpha/ unix51A+/A.07.10/RPC_DCE_TCP/actions/mailq_pr.sh.Z /var/opt/OV/share/databases/OpC/mgd_node/vendor/dec/alpha/ unix51A+/A.07.10/RPC_DCE_TCP/cmds/opc_sec_v.sh.Z /var/opt/OV/share/databases/OpC/mgd_node/vendor/dec/alpha/ unix51A+/A.07.10/RPC_DCE_TCP/cmds/opcdf.Z /var/opt/OV/share/databases/OpC/mgd_node/vendor/dec/alpha/ unix51A+/A.07.10/RPC_DCE_TCP/cmds/opclpst.Z /var/opt/OV/share/databases/OpC/mgd_node/vendor/dec/alpha/ unix51A+/A.07.10/RPC_DCE_TCP/cmds/opcps.Z /var/opt/OV/share/databases/OpC/mgd_node/vendor/dec/alpha/ unix51A+/A.07.10/RPC_DCE_TCP/monitor/opcfwtmp.Z /var/opt/OV/share/databases/OpC/mgd_node/vendor/dec/alpha/ unix/A.07.10/RPC_NCS/monitor/opcfwtmp4.Z /var/opt/OV/share/databases/OpC/mgd_node/vendor/dec/alpha/ unix/A.07.10/RPC_NCS/monitor/opcfwtmp5.Z /var/opt/OV/share/databases/OpC/mgd_node/vendor/dec/alpha/ unix/A.07.10/RPC_NCS/monitor/opcfwtmp.Z what(1) Output: OVOPC-CLT.OVOPC-OSF-CLT,fr=A.07.10,fa=HP-UX_B.11.00_32/64, v=HP: /var/opt/OV/share/databases/OpC/mgd_node/vendor/dec/alpha/ unix/A.07.10/RPC_DCE_TCP/opc_pkg.Z: None /var/opt/OV/share/databases/OpC/mgd_node/vendor/dec/alpha/ unix/A.07.10/RPC_DCE_TCP/comm_pkg.Z: None /var/opt/OV/share/databases/OpC/mgd_node/vendor/dec/alpha/ unix/A.07.10/RPC_DCE_TCP/perf_pkg.Z: None /var/opt/OV/share/databases/OpC/mgd_node/vendor/dec/alpha/ unix/A.07.10/RPC_NCS/opc_pkg.Z: None /var/opt/OV/share/databases/OpC/mgd_node/vendor/dec/alpha/ unix/A.07.10/RPC_NCS/comm_pkg.Z: None /var/opt/OV/share/databases/OpC/mgd_node/vendor/dec/alpha/ unix/A.07.10/RPC_NCS/perf_pkg.Z: None /var/opt/OV/share/databases/OpC/mgd_node/vendor/dec/alpha/ 
unix51A+/A.07.10/RPC_DCE_TCP/opc_pkg.Z: None /var/opt/OV/share/databases/OpC/mgd_node/vendor/dec/alpha/ unix51A+/A.07.10/RPC_DCE_TCP/comm_pkg.Z: None /var/opt/OV/share/databases/OpC/mgd_node/vendor/dec/alpha/ unix51A+/A.07.10/RPC_DCE_TCP/perf_pkg.Z: None /var/opt/OV/share/databases/OpC/mgd_node/vendor/dec/alpha/ unix/A.07.10/RPC_DCE_TCP/opc_version: None /var/opt/OV/share/databases/OpC/mgd_node/vendor/dec/alpha/ unix/A.07.10/RPC_DCE_TCP/comm_version: None /var/opt/OV/share/databases/OpC/mgd_node/vendor/dec/alpha/ unix/A.07.10/RPC_DCE_TCP/perf_version: None /var/opt/OV/share/databases/OpC/mgd_node/vendor/dec/alpha/ unix/A.07.10/RPC_NCS/opc_version: None /var/opt/OV/share/databases/OpC/mgd_node/vendor/dec/alpha/ unix/A.07.10/RPC_NCS/comm_version: None /var/opt/OV/share/databases/OpC/mgd_node/vendor/dec/alpha/ unix/A.07.10/RPC_NCS/perf_version: None /var/opt/OV/share/databases/OpC/mgd_node/vendor/dec/alpha/ unix51A+/A.07.10/RPC_DCE_TCP/opc_version: None /var/opt/OV/share/databases/OpC/mgd_node/vendor/dec/alpha/ unix51A+/A.07.10/RPC_DCE_TCP/comm_version: None /var/opt/OV/share/databases/OpC/mgd_node/vendor/dec/alpha/ unix51A+/A.07.10/RPC_DCE_TCP/perf_version: None /var/opt/OV/share/databases/OpC/mgd_node/vendor/dec/alpha/ unix/A.07.10/RPC_DCE_TCP/opc_inst: HP OpenView Operations A.07.20 (04/16/03) /var/opt/OV/share/databases/OpC/mgd_node/vendor/dec/alpha/ unix/A.07.10/RPC_NCS/opc_inst: HP OpenView Operations A.07.20 (04/16/03) /var/opt/OV/share/databases/OpC/mgd_node/vendor/dec/alpha/ unix51A+/A.07.10/RPC_DCE_TCP/opc_inst: HP OpenView Operations A.07.20 (04/16/03) /var/opt/OV/share/databases/OpC/mgd_node/vendor/dec/alpha/ unix/A.07.10/RPC_DCE_TCP/install/opcrinst: HP OpenView Operations A.07.20 (04/16/03) /var/opt/OV/share/databases/OpC/mgd_node/vendor/dec/alpha/ unix/A.07.10/RPC_NCS/install/opcrinst: HP OpenView Operations A.07.20 (04/16/03) /var/opt/OV/share/databases/OpC/mgd_node/vendor/dec/alpha/ unix51A+/A.07.10/RPC_DCE_TCP/install/opcrinst: HP OpenView Operations 
A.07.20 (04/16/03) /opt/OV/bin/OpC/install/cluster_deploy.sh: None /opt/OV/ReleaseNotes/OVOPC-OSF-CLT.pdf: None /var/opt/OV/share/databases/OpC/mgd_node/vendor/dec/alpha/ unix/AgentPlatform: None /var/opt/OV/share/databases/OpC/mgd_node/vendor/dec/alpha/ unix51A+/AgentPlatform: None /var/opt/OV/share/databases/OpC/mgd_node/vendor/dec/alpha/ unix51A+/A.07.10/RPC_DCE_TCP/install/opcrclchk: HP OpenView Operations A.07.20 (04/16/03) /var/opt/OV/share/databases/OpC/mgd_node/vendor/dec/alpha/ unix51A+/A.07.10/RPC_DCE_TCP/install/opcrdschk: HP OpenView Operations A.07.20 (04/16/03) /var/opt/OV/share/databases/OpC/mgd_node/vendor/dec/alpha/ unix51A+/A.07.10/RPC_DCE_TCP/install/opcrndchk: HP OpenView Operations A.07.20 (04/16/03) /var/opt/OV/share/databases/OpC/mgd_node/vendor/dec/alpha/ unix51A+/A.07.10/RPC_DCE_TCP/install/opcroschk: HP OpenView Operations A.07.20 (04/16/03) /var/opt/OV/share/databases/OpC/mgd_node/vendor/dec/alpha/ unix51A+/A.07.10/RPC_DCE_TCP/install/opcrverchk: HP OpenView Operations A.07.20 (04/16/03) /var/opt/OV/share/databases/OpC/mgd_node/vendor/dec/alpha/ unix51A+/A.07.10/RPC_DCE_TCP/install/opcnsl: HP OpenView Operations A.07.20 (04/16/03) $RCSfile: crt0.s,v $ $Revision: 1.1.26.2 $ (DEC) $Da te: 1997/04/03 22:25:40 $ /var/opt/OV/share/databases/OpC/mgd_node/vendor/dec/alpha/ unix51A+/A.07.10/RPC_DCE_TCP/monitor/ana_disk.sh.Z: None /var/opt/OV/share/databases/OpC/mgd_node/vendor/dec/alpha/ unix51A+/A.07.10/RPC_DCE_TCP/monitor/cpu_mon.sh.Z: None /var/opt/OV/share/databases/OpC/mgd_node/vendor/dec/alpha/ unix51A+/A.07.10/RPC_DCE_TCP/monitor/disk_mon.sh.Z: None /var/opt/OV/share/databases/OpC/mgd_node/vendor/dec/alpha/ unix51A+/A.07.10/RPC_DCE_TCP/monitor/proc_mon.sh.Z: None /var/opt/OV/share/databases/OpC/mgd_node/vendor/dec/alpha/ unix51A+/A.07.10/RPC_DCE_TCP/monitor/sh_procs.sh.Z: None /var/opt/OV/share/databases/OpC/mgd_node/vendor/dec/alpha/ unix51A+/A.07.10/RPC_DCE_TCP/monitor/swap_mon.sh.Z: None 
/var/opt/OV/share/databases/OpC/mgd_node/vendor/dec/alpha/ unix51A+/A.07.10/RPC_DCE_TCP/monitor/vp_chk.sh.Z: None /var/opt/OV/share/databases/OpC/mgd_node/vendor/dec/alpha/ unix51A+/A.07.10/RPC_DCE_TCP/monitor/mailq_l.sh.Z: None /var/opt/OV/share/databases/OpC/mgd_node/vendor/dec/alpha/ unix51A+/A.07.10/RPC_DCE_TCP/actions/st_inetd.sh.Z: None /var/opt/OV/share/databases/OpC/mgd_node/vendor/dec/alpha/ unix51A+/A.07.10/RPC_DCE_TCP/actions/ st_syslogd.sh.Z: None /var/opt/OV/share/databases/OpC/mgd_node/vendor/dec/alpha/ unix51A+/A.07.10/RPC_DCE_TCP/actions/st_mail.sh.Z: None /var/opt/OV/share/databases/OpC/mgd_node/vendor/dec/alpha/ unix51A+/A.07.10/RPC_DCE_TCP/actions/mailq_pr.sh.Z: None /var/opt/OV/share/databases/OpC/mgd_node/vendor/dec/alpha/ unix51A+/A.07.10/RPC_DCE_TCP/cmds/opc_sec_v.sh.Z: None /var/opt/OV/share/databases/OpC/mgd_node/vendor/dec/alpha/ unix51A+/A.07.10/RPC_DCE_TCP/cmds/opcdf.Z: None /var/opt/OV/share/databases/OpC/mgd_node/vendor/dec/alpha/ unix51A+/A.07.10/RPC_DCE_TCP/cmds/opclpst.Z: None /var/opt/OV/share/databases/OpC/mgd_node/vendor/dec/alpha/ unix51A+/A.07.10/RPC_DCE_TCP/cmds/opcps.Z: None /var/opt/OV/share/databases/OpC/mgd_node/vendor/dec/alpha/ unix51A+/A.07.10/RPC_DCE_TCP/monitor/opcfwtmp.Z: None /var/opt/OV/share/databases/OpC/mgd_node/vendor/dec/alpha/ unix/A.07.10/RPC_NCS/monitor/opcfwtmp4.Z: None /var/opt/OV/share/databases/OpC/mgd_node/vendor/dec/alpha/ unix/A.07.10/RPC_NCS/monitor/opcfwtmp5.Z: None /var/opt/OV/share/databases/OpC/mgd_node/vendor/dec/alpha/ unix/A.07.10/RPC_NCS/monitor/opcfwtmp.Z: None cksum(1) Output: OVOPC-CLT.OVOPC-OSF-CLT,fr=A.07.10,fa=HP-UX_B.11.00_32/64, v=HP: 158182020 6671058 /var/opt/OV/share/databases/OpC/mgd_node/ vendor/dec/alpha/unix/A.07.10/RPC_DCE_TCP/opc_pkg.Z 1703775367 2121665 /var/opt/OV/share/databases/OpC/mgd_node/ vendor/dec/alpha/unix/A.07.10/RPC_DCE_TCP/comm_pkg.Z 583971370 1023559 /var/opt/OV/share/databases/OpC/mgd_node/ vendor/dec/alpha/unix/A.07.10/RPC_DCE_TCP/perf_pkg.Z 1362778813 
6818074 /var/opt/OV/share/databases/OpC/mgd_node/ vendor/dec/alpha/unix/A.07.10/RPC_NCS/opc_pkg.Z 1703775367 2121665 /var/opt/OV/share/databases/OpC/mgd_node/ vendor/dec/alpha/unix/A.07.10/RPC_NCS/comm_pkg.Z 583971370 1023559 /var/opt/OV/share/databases/OpC/mgd_node/ vendor/dec/alpha/unix/A.07.10/RPC_NCS/perf_pkg.Z 697250300 6996409 /var/opt/OV/share/databases/OpC/mgd_node/ vendor/dec/alpha/unix51A+/A.07.10/RPC_DCE_TCP/ opc_pkg.Z 1703775367 2121665 /var/opt/OV/share/databases/OpC/mgd_node/ vendor/dec/alpha/unix51A+/A.07.10/RPC_DCE_TCP/ comm_pkg.Z 583971370 1023559 /var/opt/OV/share/databases/OpC/mgd_node/ vendor/dec/alpha/unix51A+/A.07.10/RPC_DCE_TCP/ perf_pkg.Z 4030162770 8 /var/opt/OV/share/databases/OpC/mgd_node/ vendor/dec/alpha/unix/A.07.10/RPC_DCE_TCP/ opc_version 2717782698 8 /var/opt/OV/share/databases/OpC/mgd_node/ vendor/dec/alpha/unix/A.07.10/RPC_DCE_TCP/ comm_version 2533018535 8 /var/opt/OV/share/databases/OpC/mgd_node/ vendor/dec/alpha/unix/A.07.10/RPC_DCE_TCP/ perf_version 4030162770 8 /var/opt/OV/share/databases/OpC/mgd_node/ vendor/dec/alpha/unix/A.07.10/RPC_NCS/opc_version 2717782698 8 /var/opt/OV/share/databases/OpC/mgd_node/ vendor/dec/alpha/unix/A.07.10/RPC_NCS/comm_version 2533018535 8 /var/opt/OV/share/databases/OpC/mgd_node/ vendor/dec/alpha/unix/A.07.10/RPC_NCS/perf_version 4030162770 8 /var/opt/OV/share/databases/OpC/mgd_node/ vendor/dec/alpha/unix51A+/A.07.10/RPC_DCE_TCP/ opc_version 2717782698 8 /var/opt/OV/share/databases/OpC/mgd_node/ vendor/dec/alpha/unix51A+/A.07.10/RPC_DCE_TCP/ comm_version 2533018535 8 /var/opt/OV/share/databases/OpC/mgd_node/ vendor/dec/alpha/unix51A+/A.07.10/RPC_DCE_TCP/ perf_version 2161148964 2228 /var/opt/OV/share/databases/OpC/mgd_node/ vendor/dec/alpha/unix/A.07.10/RPC_DCE_TCP/opc_inst 2161148964 2228 /var/opt/OV/share/databases/OpC/mgd_node/ vendor/dec/alpha/unix/A.07.10/RPC_NCS/opc_inst 2161148964 2228 /var/opt/OV/share/databases/OpC/mgd_node/ vendor/dec/alpha/unix51A+/A.07.10/RPC_DCE_TCP/ opc_inst 
3121997898 117926 /var/opt/OV/share/databases/OpC/mgd_node/ vendor/dec/alpha/unix/A.07.10/RPC_DCE_TCP/install/ opcrinst 1739698163 119257 /var/opt/OV/share/databases/OpC/mgd_node/ vendor/dec/alpha/unix/A.07.10/RPC_NCS/install/ opcrinst 3121997898 117926 /var/opt/OV/share/databases/OpC/mgd_node/ vendor/dec/alpha/unix51A+/A.07.10/RPC_DCE_TCP/ install/opcrinst 79966741 611 /opt/OV/bin/OpC/install/cluster_deploy.sh 1831028742 200847 /opt/OV/ReleaseNotes/OVOPC-OSF-CLT.pdf 760691590 3902 /var/opt/OV/share/databases/OpC/mgd_node/ vendor/dec/alpha/unix/AgentPlatform 3341964781 3738 /var/opt/OV/share/databases/OpC/mgd_node/ vendor/dec/alpha/unix51A+/AgentPlatform 1706698175 27594 /var/opt/OV/share/databases/OpC/mgd_node/ vendor/dec/alpha/unix51A+/A.07.10/RPC_DCE_TCP/ install/opcrclchk 4160381535 29174 /var/opt/OV/share/databases/OpC/mgd_node/ vendor/dec/alpha/unix51A+/A.07.10/RPC_DCE_TCP/ install/opcrdschk 1223514701 6782 /var/opt/OV/share/databases/OpC/mgd_node/ vendor/dec/alpha/unix51A+/A.07.10/RPC_DCE_TCP/ install/opcrndchk 140878922 6014 /var/opt/OV/share/databases/OpC/mgd_node/ vendor/dec/alpha/unix51A+/A.07.10/RPC_DCE_TCP/ install/opcroschk 2427246546 28382 /var/opt/OV/share/databases/OpC/mgd_node/ vendor/dec/alpha/unix51A+/A.07.10/RPC_DCE_TCP/ install/opcrverchk 1484995777 16528 /var/opt/OV/share/databases/OpC/mgd_node/ vendor/dec/alpha/unix51A+/A.07.10/RPC_DCE_TCP/ install/opcnsl 1279845957 2785 /var/opt/OV/share/databases/OpC/mgd_node/ vendor/dec/alpha/unix51A+/A.07.10/RPC_DCE_TCP/ monitor/ana_disk.sh.Z 167748528 6071 /var/opt/OV/share/databases/OpC/mgd_node/ vendor/dec/alpha/unix51A+/A.07.10/RPC_DCE_TCP/ monitor/cpu_mon.sh.Z 3470587861 6156 /var/opt/OV/share/databases/OpC/mgd_node/ vendor/dec/alpha/unix51A+/A.07.10/RPC_DCE_TCP/ monitor/disk_mon.sh.Z 810668775 5706 /var/opt/OV/share/databases/OpC/mgd_node/ vendor/dec/alpha/unix51A+/A.07.10/RPC_DCE_TCP/ monitor/proc_mon.sh.Z 2449881583 5415 /var/opt/OV/share/databases/OpC/mgd_node/ 
vendor/dec/alpha/unix51A+/A.07.10/RPC_DCE_TCP/ monitor/sh_procs.sh.Z 2078923664 5749 /var/opt/OV/share/databases/OpC/mgd_node/ vendor/dec/alpha/unix51A+/A.07.10/RPC_DCE_TCP/ monitor/swap_mon.sh.Z 3097575168 5671 /var/opt/OV/share/databases/OpC/mgd_node/ vendor/dec/alpha/unix51A+/A.07.10/RPC_DCE_TCP/ monitor/vp_chk.sh.Z 314917733 5840 /var/opt/OV/share/databases/OpC/mgd_node/ vendor/dec/alpha/unix51A+/A.07.10/RPC_DCE_TCP/ monitor/mailq_l.sh.Z 39916066 2567 /var/opt/OV/share/databases/OpC/mgd_node/ vendor/dec/alpha/unix51A+/A.07.10/RPC_DCE_TCP/ actions/st_inetd.sh.Z 1676668619 2578 /var/opt/OV/share/databases/OpC/mgd_node/ vendor/dec/alpha/unix51A+/A.07.10/RPC_DCE_TCP/ actions/st_syslogd.sh.Z 2235767416 2573 /var/opt/OV/share/databases/OpC/mgd_node/ vendor/dec/alpha/unix51A+/A.07.10/RPC_DCE_TCP/ actions/st_mail.sh.Z 1412889581 2516 /var/opt/OV/share/databases/OpC/mgd_node/ vendor/dec/alpha/unix51A+/A.07.10/RPC_DCE_TCP/ actions/mailq_pr.sh.Z 3836794521 13219 /var/opt/OV/share/databases/OpC/mgd_node/ vendor/dec/alpha/unix51A+/A.07.10/RPC_DCE_TCP/cmds/ opc_sec_v.sh.Z 2750492132 313 /var/opt/OV/share/databases/OpC/mgd_node/ vendor/dec/alpha/unix51A+/A.07.10/RPC_DCE_TCP/cmds/ opcdf.Z 3937925163 378 /var/opt/OV/share/databases/OpC/mgd_node/ vendor/dec/alpha/unix51A+/A.07.10/RPC_DCE_TCP/cmds/ opclpst.Z 821063059 389 /var/opt/OV/share/databases/OpC/mgd_node/ vendor/dec/alpha/unix51A+/A.07.10/RPC_DCE_TCP/cmds/ opcps.Z 3465694573 6268 /var/opt/OV/share/databases/OpC/mgd_node/ vendor/dec/alpha/unix51A+/A.07.10/RPC_DCE_TCP/ monitor/opcfwtmp.Z 3656017504 5431 /var/opt/OV/share/databases/OpC/mgd_node/ vendor/dec/alpha/unix/A.07.10/RPC_NCS/monitor/ opcfwtmp4.Z 1299706058 5460 /var/opt/OV/share/databases/OpC/mgd_node/ vendor/dec/alpha/unix/A.07.10/RPC_NCS/monitor/ opcfwtmp5.Z 775725465 5628 /var/opt/OV/share/databases/OpC/mgd_node/ vendor/dec/alpha/unix/A.07.10/RPC_NCS/monitor/ opcfwtmp.Z Patch Conflicts: None Patch Dependencies: None Hardware Dependencies: None Other Dependencies: 
None

Supersedes:
     PHSS_27543

Equivalent Patches:
     ITOSOL_00221: sparcSOL: 2.7 2.8

Patch Package Size: 28230 KBytes

Installation Instructions:
     Please review all instructions and the Hewlett-Packard
     SupportLine User Guide or your Hewlett-Packard support terms
     and conditions for precautions, scope of license,
     restrictions, and limitation of liability and warranties,
     before installing this patch.
     ------------------------------------------------------------
     1. Back up your system before installing a patch.

     2. Login as root.

     3. Copy the patch to the /tmp directory.

     4. Move to the /tmp directory and unshar the patch:

        cd /tmp
        sh PHSS_28950

     5. Run swinstall to install the patch:

        swinstall -x autoreboot=true -x patch_match_target=true \
            -s /tmp/PHSS_28950.depot

     By default swinstall will archive the original software in
     /var/adm/sw/save/PHSS_28950. If you do not wish to retain a
     copy of the original software, include the patch_save_files
     option in the swinstall command above:

        -x patch_save_files=false

     WARNING: If patch_save_files is false when a patch is
     installed, the patch cannot be deinstalled. Please be
     careful when using this feature.

     For future reference, the contents of the PHSS_28950.text
     file are available in the product readme:

        swlist -l product -a readme -d @ /tmp/PHSS_28950.depot

     To put this patch on a magnetic tape and install from the
     tape drive, use the command:

        dd if=/tmp/PHSS_28950.depot of=/dev/rmt/0m bs=2k

Special Installation Instructions:
     BEFORE LOADING THIS PATCH...

     (A) Patch Installation Instructions
         -------------------------------

     (A1) Install the patch, following the standard installation
          instructions. For backing up the system before
          installing a patch, you may use opc_backup(1m).

          NOTE: MAKE SURE THAT NO AGENT OF THE PLATFORM ADDRESSED
          BY THIS PATCH IS DISTRIBUTED (either from the VPO
          Administrator's GUI or from command line using inst.sh)
          WHILE RUNNING SWINSTALL.
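Steps 3 through 5 of the installation instructions can be collected into a small helper sketch. This script is not part of the patch: it only records the documented commands in a plan and prints them (a dry run), so nothing here actually copies files or runs swinstall. On a real HP-UX 11.x management server you would execute the printed commands as root instead.

```shell
#!/bin/sh
# Dry-run sketch of installation steps 3-5 for PHSS_28950.
# Collects each command into $PLAN instead of executing it,
# so the sequence can be reviewed before running it as root.
PATCH=PHSS_28950

PLAN=""
plan() {
    # append one command line to the plan
    PLAN="${PLAN}$*
"
}

plan cp "$PATCH" /tmp/                       # step 3: copy patch to /tmp
plan cd /tmp                                 # step 4: unshar the patch
plan sh "$PATCH"
plan swinstall -x autoreboot=true -x patch_match_target=true \
     -s "/tmp/$PATCH.depot"                  # step 5: install the depot

printf '%s' "$PLAN"
```

Add `-x patch_save_files=false` to the swinstall line only if you are certain you will never need to deinstall the patch.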
     NOTE: This patch must be installed on the VPO Management
     Server system, NOT directly on a VPO Managed Node. Changes
     take effect on managed nodes by means of VPO Software
     Distribution (using 'Force Update' if an agent is already
     installed on the managed node). See chapter 2 of the VPO
     Administrator's Reference manual for more information.

     NOTE: The VPO Agent consists of several components that are
     patched individually. This patch updates only the
     Event/Action component. Therefore, software distribution
     will report that the agent software is updated to A.07.10,
     not to A.07.12, and the version of the ITOAgent bundle will
     also still be A.07.10. You can verify the installed versions
     of the components on the agent system using
     opcragt -agent_version. For example:

        opcragt -agent_version hpbbln8
        Node hpbbln8.bbn.hp.com:
        OPC_INSTALLED_VERSION = A.07.12
        PERF_INSTALLED_VERSION = A.07.10
        COMM_INSTALLED_VERSION = 2.5.3.9
        Done.

     NOTE: Before installing new agents on any Tru64 node, make
     sure that the old agents are deinstalled from the managed
     node. Also make sure that after the old agents are
     deinstalled, neither the /usr/opt/OV nor the /var/opt/OV
     directory exists on the managed node. These two directories
     must be removed prior to the new agent installation.

     (B) Patch Deinstallation Instructions
         ---------------------------------

     (B1) To deinstall the patch PHSS_28950 run swremove:

          NOTE: MAKE SURE THAT NO AGENT OF THE PLATFORM ADDRESSED
          BY THIS PATCH IS DISTRIBUTED (either from the ITO
          Administrator's GUI or from command line using inst.sh)
          WHILE RUNNING SWREMOVE.

          # swremove PHSS_28950
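Because the ITOAgent bundle version stays at A.07.10 while the Event/Action component moves ahead, checking the component level from a script means parsing the opcragt output rather than the bundle version. A minimal sketch, assuming the output format shown in the example above; the here-document stands in for a live call, so on a real management server you would pipe `opcragt -agent_version <node>` into awk instead:

```shell
#!/bin/sh
# Extract the Event/Action component version (OPC_INSTALLED_VERSION)
# from opcragt -agent_version output. The here-document reproduces the
# sample output from this patch text; replace it with a real call:
#   opcragt -agent_version hpbbln8 | awk -F' = ' '/^OPC_INSTALLED_VERSION/ { print $2 }'
OPC_VERSION=$(awk -F' = ' '/^OPC_INSTALLED_VERSION/ { print $2 }' <<'EOF'
Node hpbbln8.bbn.hp.com:
OPC_INSTALLED_VERSION = A.07.12
PERF_INSTALLED_VERSION = A.07.10
COMM_INSTALLED_VERSION = 2.5.3.9
Done.
EOF
)
echo "Event/Action component: $OPC_VERSION"
```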