Patch Name: PHSS_28781
Patch Description: s700_800 11.00 OV ITO6.0X HP-UX 10.x Agent Patch A.06.15
Creation Date: 03/07/23
Post Date: 03/07/28
Hardware Platforms - OS Releases:
  s700: 11.00
  s800: 11.00
Products: OpenView IT/Operations 6.0
Filesets: OVOPC-CLT.OVOPC-UX10-CLT,fr=A.06.00,fa=HP-UX_B.11.00_32/64,v=HP
Automatic Reboot?: No
Status: General Release
Critical: No
Category Tags: defect_repair general_release
Path Name: /hp-ux_patches/s700_800/11.X/PHSS_28781

Symptoms:
PHSS_28781:
- SR: H555007666
  mailq_l.sh does not count mails correctly if sendmail version 10 or above is used.
- SR: H555008275
  The Message Agent can hang for no apparent reason and stop sending all messages to the Management Server, regardless of its state.
- SR: H555008529
  If a process dies immediately after being started by the Control Agent, OpC30-1094 messages may start appearing in the error logfile.
- SR: H555008553
  itochecker was only able to collect information from the management server; there was no way to collect information from the managed nodes. Checking the kernel parameters did not return the correct values. No information about opcerror was gathered by itochecker.
- SR: H555008602
  If OPC_RPC_ONLY is set to TRUE in opcinfo, the message agent core dumps after a while.
- SR: 8606187183
  After a deploy/undeploy of opcmsg policies/templates the suppressing times are lost. Messages that should be suppressed after a deploy/undeploy of policies/templates are shown.
- SR: 8606189243
  A utility for checking the basic types of communication between the OVO server and agents is missing.
- SR: 8606232431
  VPO tries to resolve a node name that contains only blanks because of a typo in a template definition or variable assignment.
- SR: 8606233602
  If a pattern like '<*.prefix>ERR<*.suffix>' is used, the prefix variable is assigned the wrong text when it should be empty.
- SR: 8606242614
  Messages are incorrectly suppressed by the logfile encapsulator if "suppress identical output messages" is specified and the messages differ only in the values of <$LOGFILE> and/or <$LOGPATH>.
- SR: 8606244523
  When using the syntax <`script`> in the logfile template and the script returns the same logfile name twice, opcle aborts.
- SR: 8606262299
  The logfile encapsulator reports that the file to be executed for preprocessing of a logfile template failed. This error occurs randomly and only from time to time. You will get an error message similar to the following:
  Command 'opcfwtmp /tmp/wtmp.stat /var/adm/wtmp /tmp/wtmp.out' configured in source 'Logins (10.x/11.x HP-UX)' returns 1. Ignoring this logfile. (OpC30-107)
- SR: B555008674
  The opcagt and opcragt commands have a new option '-version'. This was not documented in either man page, and the new option was not part of the usage strings of opcagt and opcragt.
- SR: B555013620
  Support for pmd's "u" option needed in opctrapi: use the UDP packet's address as the source of the trap.
- SR: B555013719
  The message agent does not stop message buffering when the management server is available again after a network outage, a fixed DNS problem, or similar. This can happen when the agent restarts or the machine reboots while the network problem persists.
- SR: B555013794
  The current directory should be removed from the agent's PATH environment variable, so that only files that are specified with a full path, or whose path is explicitly listed in PATH, can be executed.
- SR: B555013891
  In MoM environments, opcmsga does not return action responses to SECONDARY managers if their name is not resolvable.
- SR: B555014093
  opcmona may crash (UNIX) or fail to process all SCHEDULE templates (Windows) when using SCHEDULE templates.
- SR: B555014132
  During a distribution the agent may report an error like:
  ITO responsible manager configuration. (OpC30-1203) Cannot open file \usr\OV\tmp\OpC\cfgchg.
  System Error Number: 13 (d) - The data is invalid. (OpC20-63)
- SR: B555014215
  The port on which opctrapi listens for incoming traps should be configurable.
- SR: B555014245
  Traps larger than 5 KB are not handled properly by opctrapi if local trap interception is used.
- SR: B555014574
  opcagt -start/-stop/-status does not work correctly if the currently running agent cannot be reached over RPC.
- SR: B555014591
  When OPC_INT_MSG_FLT is set to TRUE, the filtered message is received corrupted on the server in a Japanese environment.
- SR: B555014596
  Includes fixes from the ECS runtime engine delivered with PHSS_26909 and equivalent patches.
- SR: B555014715
  The Control Agent slowly grows in memory usage.
- SR: B555014851
  opcmsga sends the same message operation (e.g. an acknowledge request created by opcmack(1)) again and again if the related message is not in the cache and one of the target managers cannot be reached.
- SR: B555014942
  The opcle process loops if a logfile is removed while it is being read.
- SR: B555015331
  The monitor agent, opcmona, may report wrong results of executed monitor scripts or programs when using many 'advanced monitors' such as OVPERF. In some of these cases opcmona might even abort.
- SR: B555015626
  opcif_read() does not return data if the signal pipe is empty but there is still more data in the queue file. This can happen, for example, if the maximum pipe size of 8192 bytes was reached and therefore no more signal bytes could be written into the signal pipe.
- SR: B555015758
  opcmsgi aborts if one of the set attributes has an unmatched '<'.

PHSS_26517:
- SR: 8606222554
  Certain policies in VPW do not work as expected, for example:
    VP_WIN-WINS-FwdAllInformation
    VP_WIN-WINS-FwdAllWarnError
    VP_WIN-DHCPCl_FwdAllInfo
    VP_WIN-DHCPCl_FwdAllWarnError
  This problem can also occur for VPO during condition matching. Matching the application and object attributes is now case sensitive.
  For example, a message with application "TEST" is matched but application "tEST" is unmatched.
- SR: 8606227840
  Variables in the template default message key are not resolved for unmatched messages.
- SR: B555012210
  An ECS circuit using reset on an unless node causes opceca to abort.
- SR: B555012929
  If you run opcdista from the command line, you do not get any useful messages, only the internal status letters. For supportability, it would be better to have explicit status and error reporting.
- SR: B555013371
  Sometimes the new scheduled action template configuration is not loaded after a distribution. Instead, the old scheduled actions are still started.
- SR: B555013435
  The message agent opcmsga hangs unpredictably. This is more likely to happen on systems with very high ICMP traffic.
- SR: B555013495
  In Japanese environments, programs using the agent APIs can fail with errors about invalid or incompatible codesets.

PHSS_25537:
- SR: H555006719
  If the agent is running as a non-root user and the management server processes are restarted, the agent has to be restarted as well; otherwise all messages are buffered.
- SR: 8606213476
  The distribution to nodes may hang or fail. This is more likely to happen while distributing to Windows NT/2000 nodes than to UNIX nodes.
- SR: B555007980
  Local automatic actions are started immediately, even though agent MSI is enabled in divert mode and the 'Immediate Local Automatic Action' box is not checked.
- SR: B555008220
  The <$MSG_TIME_CREATED> variable is not substituted in the message template.
- SR: B555008838
  The event correlation engine creates a 'Time cannot go backwards' error if the system is very busy.
- SR: B555009745
  The template default of the object field of a monitor template is not used.
- SR: B555010620
  Some messages are missing in the Japanese message catalog. You get a 'Cannot generate message' error.
- SR: B555010955
  Even if you used opcswitchuser.sh to specify a non-root user to run the ITO agent, it is still started as user root after a system reboot.
- SR: B555010966
  A message key relation containing <*> does not always match message keys correctly. This results in messages not being acknowledged when they should be.
- SR: B555011184
  opcagt fails to start opcctla if it is started as ./opcagt and /opt/OV/bin/OpC is not in the search PATH.
- SR: B555011505
  1. opcecm/opceca might run into a deadlock while processing many ECS annotate nodes.
  2. opcecm/opceca might leak memory when ECS annotate nodes are used.
- SR: B555011594
  The original message text of a logfile encapsulator message is wrong if <$LOGPATH> or <$LOGFILE> is used.
- SR: B555011638
  Pattern matching cannot match the new line character of multi-line messages.
- SR: B555011979
  Pattern matching hangs if only single-byte Japanese HANKAKU KANA characters are used.
- SR: B555011990
  The ECS event log (ecevilg) has an invalid time difference to the next message, which can cause the ECS simulator to hang, or appear to hang, when loading an event log file with such values.
- SR: B553000162
  After opcagt -stop, opcagt -status reports that the control agent is not running although it is running, and sometimes you get the following error in the message browser:
  'Ouput of kill -0 differs from internal pids-table for index (OpC30-1094)'

PHSS_24609:
- SR: B555010879
  opctrapi aborts during template distribution if conditions with the 'Suppress Identical Output Messages' feature are used.
- SR: B555010899
  opcdista requests distribution data from a wrong manager if there is a secondary manager with the same short hostname as the appropriate primary manager.
- SR: B555010948
  Nested alternatives were not handled correctly in the pattern matching algorithm; e.g. the pattern '[a|b]c|d' was handled like '[a|b|d]c'.
- SR: B555010980
  Traps without an SNMP variable are not matched because the server patch adds an extra attribute to the template.
- SR: B555011126
  Agent distribution using the new Secure Shell (SSH) method introduced with the A.06.08 server patches does not work for HP-UX agents. Nothing is installed, but you get no error message about it. The only hint is that the "Unpacking truck file /tmp/opc_tmp/opc_pkg.Z" message is not displayed during the installation.
- SR: B555011445
  The VPO agent processes abort if no aCC runtime library patch is installed on the system.

PHSS_23987:
- SR: 8606180583
  When the VPO agent was started manually from an MC/SG shared volume, the agent was killed upon package stop, because the agent used this volume as its current directory. Now the agent always starts in /tmp. This has the side effect that any core file for the agent is written into /tmp.
- SR: 8606180891
  The template default for the service name is not used.
- SR: 8606181988
  The event interceptor does not forward on "forward unmatched" if a "suppress unmatched" condition is used in a second template.
- SR: 8606182250
  opcfwtmp does not trap bad logins from CDE login.
- SR: 8606182981
  The ITO agent is not started after a system reboot if the default runlevel is lower than 3, and you do not get any warning about that fact.
- SR: B555010341
  The agent sometimes does not start automatically after reboot, while a manual start works fine.

PHSS_23820:
- The event correlation process opceca (agent) / opcecm (server) might crash after processing several annotation nodes.
- The VPO A.06.03 patches for HP-UX and Solaris do not work as expected in firewall environments: while server port restrictions are still respected, client-side port restrictions are ignored.

PHSS_22880:
- Changes were required for the security add-on product VantagePoint Advanced Security.
- The agent installation configure script fails to convert ITO 4 queue files: awk syntax error in the swagent.log file.

PHSS_22011:
- disk_mon.sh returns invalid values if the bdf command returns more than one line of output for a filesystem (e.g. if the filesystem name exceeds its column width).
- Several changes for firewall environments. For detailed information refer to the VPO Firewall Configuration White Paper version 3.0.
- When executing large numbers of automatic actions, some of them stayed in the 'running' state.
- opctrapi aborts after getting traps with an unresolvable IP address.
- The handling of '\' was different in the pattern definition and the "matching pattern".
- If buffer file size limitation is enabled, the agent may discard low-severity messages even if there is still space in the buffer file.

Defect Description:
PHSS_28781:
- SR: H555007666
  The output of sendmail 10.75 has changed, so the regular expression needed to be modified.
- SR: H555008275
  The signal handler for SIGIO was installed before the socket on which ICMP replies are received was set to non-blocking mode. An unsolicited SIGIO would trigger the signal handler, which would wait indefinitely on the socket for data that would never arrive. Since the NCS agent is single-threaded, all communication would stop. The fix is to set the non-blocking mode before installing the signal handler, so that it cannot wait forever.
- SR: H555008529
  This is a timing issue, where internal structures are not updated by the signal handler in time for proper values to be written to the PIDS file. An additional check for process presence has been implemented before writing the PIDS file.
- SR: H555008553
  Resolution: itochecker_agt and its configuration file itochecker_agt.conf were introduced. Checking the kernel parameters now returns the correct values. An additional option (8) was added which retrieves the opcerror file on the management server.
- SR: H555008602
  When OPC_RPC_ONLY is used, ICMP handling is not initialized, but the message agent will call opc_pb_ping_reset() after a successful server checkalive cycle. This causes an invalid (NULL) pointer to be dereferenced and causes a core dump. opc_pb_ping_reset() now checks whether ICMP handling has been initialized and, if not, immediately returns from the function.
- SR: 8606187183
  The opcmsg interceptor restarts after a deploy/undeploy of policies/templates. During this process all the policy/template information is cleaned up and read again from a temporary file. Because suppressing times are not stored in this temporary file, these times were lost. Now the suppressing times are carried over to the new data.
- SR: 8606232431
  VPO now ignores node names that contain only white space characters, without contacting the name service.
- SR: 8606242614
  The variables <$LOGFILE> and <$LOGPATH> were replaced after the suppression rules were evaluated. Therefore the comparison did not use the actual logfile name or path, but compared the literal string "<$LOGFILE>" or "<$LOGPATH>".
- SR: 8606244523
  opcle aborts when a <`script`> returns a logfile name twice, because the same file is referenced and handled twice. To fix this, opcle checks whether the same logfile has already been returned, so that each logfile is added to the internal list exactly once.
- SR: B555008674
  The man pages for opcagt and opcragt now document the new option '-version'. The message catalog was updated to show the '-version' option in the usage strings of the opcagt and opcragt commands.
- SR: B555013620
  NNM 6.2 introduced an event option to pmd, "u". This option specifies to prefer the IP address in an SNMPv1 trap's UDP header over the contents of the SNMPv1 trap PDU's agent_addr field. A new opcinfo variable OPC_USE_UDP_AS_TRAP_SOURCE was added for opctrapi. If set to TRUE, opctrapi will use the UDP address instead of the agent_addr.
- SR: B555013719
  The message agent remained in buffering mode even when the management server was available again. The reason is that the agent was unable to resolve the management server name to an IP address at startup, and did not try again during runtime. This has been fixed: the agent now checks for a resolvable name every time a message is to be buffered, until the name can be resolved; after that, the normal checkalive mechanism, which handles buffered messages, takes over.
- SR: B555013891
  Even if the IP address of the management server was specified in the mgrconf file, it was not used except for the primary manager. This behavior was changed to give the mgrconf file precedence over name resolution.
- SR: B555014093
  opcmona may crash (UNIX) or fail to process all SCHEDULE templates (Windows) when using SCHEDULE templates. This can occur when there are only spaces in one of the schedule fields (Minute, Hour, Day of the Month, Month, Year, Day of the Week). You can verify this by going to the conf/OpC directory on the node and running 'opcdcode monitor'. When there are entries like WEEKDAY " ", the problem can occur. Now the monitor agent treats a sequence of spaces like an empty string, that is, as a wildcard, and uses all valid values in the possible range; for WEEKDAY this is 0-6.
- SR: B555014132
  During a distribution the agent may report error OpC30-1203/OpC20-63 when trying to access the cfgchg file. The cause of this problem is that several processes try to get exclusive access to this file at the same time. The problem has been fixed by retrying up to 10 times, with a delay of one second, in case the error occurs.
- SR: B555014215
  Using the new opcinfo variable SNMP_TRAP_PORT, opctrapi can now be configured to listen on a port other than 162. This is only effective if traps are not received through the NNM pmd.
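As a sketch, the new opctrapi variables are plain keyword/value lines in the agent's opcinfo file. The file location (typically /opt/OV/bin/OpC/install/opcinfo on HP-UX) and the example port value are assumptions, not stated in this patch text; verify them on your node.

```
# opcinfo excerpt (path and port value are illustrative assumptions)
# Let opctrapi listen on port 30162 instead of the default 162:
SNMP_TRAP_PORT 30162
# Prefer the UDP packet's source address over the trap PDU's agent_addr:
OPC_USE_UDP_AS_TRAP_SOURCE TRUE
```

After changing opcinfo, opctrapi would typically need to be restarted for the settings to take effect.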
- SR: B555014574
  With these changes, opcagt is now able to deal with a running opcctla that is not reachable via RPC: opcagt -status will display a warning if the currently running opcctla is not reachable over RPC, but will then display the status according to the pids file. opcagt -stop will also kill the unresponsive opcctla and try to start a new one. If opcctla is not reachable over RPC, opcagt -start will kill all running agent processes and then start a new opcctla, which starts the agent processes. Of course, the agent will not be able to start if RPC is still unavailable at that time.
- SR: B555014591
  The defect was caused by a double conversion from the server code set to the internal code set: once on the agent side when it sent the internal message to opcmsga, and once by opcmsgi when it forwarded the message again. Now the message is converted back from the internal code set to the server code set in opcmsga before being sent to the opcmsgi queue. The management server will get the message through opcmsga in the internal code set and will convert it into the server code set. The conversion is made only if the internal code set is different from the server code set.
- SR: B555014851
  opcmsga maintains an internal cache to find the target managers per message ID. The cache expires after 1 hour (configurable with the opcsvinfo variable OPC_STORE_TIME_FOR_MGR_INFO); after that, a flaw in the algorithm caused a message operation on a non-cached message to be sent again and again until the last target manager in an internal list could be reached.
- SR: B555015331
  opcmona holds a central table for all subprocess-related information. Advanced monitors are executed in separate threads and could access this table in parallel, overwriting each other's data. The table accesses are now serialized by a mutex.
For SRs not listed in this section, please see the list of symptoms.

PHSS_26517:
- SR: 8606222554
  The condition tests for the message attributes application, object and message group are always case sensitive; therefore a message with the application "TEST" matches but "tEST" does not. This patch introduces an opcinfo flag which allows switching between case-sensitive and case-insensitive checks:
    flag   : OPC_COND_FIELD_ICASE
    type   : boolean
    default: FALSE
  Setting this to TRUE makes the policies mentioned above work.
- SR: B555012210
  Linked with a new ECS runtime library that contains a fix for this problem.
- SR: B555012929
  opcdista communicates with the opcctla process via stdin/stdout, so if you run it from the command line you only see the status letters but do not know what they mean. The new '-v' option prints more output, e.g.:
    $ ./opcdista -v
    0 - No distribution data available.
- SR: B555013435
  One thread tried to read from a socket while another thread closed it. This could happen due to missing locking of global data. This data is now guarded by a mutex.
- SR: B555013495
  When tracing was added to the API functions, a necessary NLS initialisation was not done. This problem was introduced only by the A.06.10 patches for HP-UX.

For SRs not listed in this section, please see the list of symptoms.

PHSS_25537:
- SR: H555006719
  When communication with the message receiver fails, the agent starts buffering messages. It periodically checks whether the server is alive by sending it ICMP packets. If the server cannot be reached with ICMP packets, no RPC communication is attempted. This does not work when the agent is running as non-root (only root is allowed to send ICMP packets): the sending function returns an OK value but does not send anything. Therefore no replies are ever received and the message agent never leaves the "Checking node" mode.
  Fix: If the agent is running as a non-root user, opcmsga immediately tries to contact the management server using RPC communication.
- SR: 8606213476
  While the agent receives several RPC calls, like "Start Distribution", "Execute Action" or "Set Primary Manager", in parallel, the calls may conflict within the control agent, which causes the NT control agent to bring up a Dr. Watson window. This conflict can also occur on UNIX, but there the control agent does not die; instead the RPC request may fail. With this version, the RPC calls which could cause conflicts are serialized.
- SR: B555010955
  The non-root user was added to the startup configuration file but not used.
- SR: B555010966
  The processing of the key relation was wrong in the logfile encapsulator: all unresolved entries followed by a resolved entry were removed, while other unresolved entries were kept as they were.
- SR: B555011184
  The working directory for the ITO agent was changed from /opt/OV/bin/OpC to /tmp to avoid problems if the agent is running in an MC/SG environment.
- SR: B555011638
  VPO could not match the new line characters of multi-line messages. The following changes have been made to allow this: It is now possible to use ^M (\r) as a field separator. New patterns are introduced to match any number of line breaks (UNIX style \n or NT style \r\n) and to match exactly n line breaks; for example, <1/> will match exactly one line break. This change works only for sources that can already create multi-line messages (for example opcmsg or the NT event log); it does not allow multi-line logfile encapsulation. This change requires a fix on both the management server and the agent, so a patch on the management server and a patch for the agent are required to use the new functionality.

For SRs not listed in this section, please see the list of symptoms.

PHSS_24609:
- SR: B555010879
  When freeing the allocated memory, a wrong function was used.
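The new line-break pattern described for SR B555011638 could be used in a match condition like the following sketch. The message text and the variable names 'dev' and 'code' are invented for illustration; <*.dev> and <#.code> are ordinary VPO pattern variables, and <1/> matches exactly one line break:

```
disk failure on <*.dev><1/>error code <#.code>
```

This would match a two-line opcmsg message whose first line names a device and whose second line carries a numeric error code.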
- SR: B555010899
  opcdista requests distribution data from a wrong manager if there is a secondary manager with the same short hostname as the appropriate primary manager, because it searches the whole list and for each name tries to match first the long and then the short name. Instead it should try the long names for all systems first, and only then try to match using the short names.
- SR: B555010948
  The grammar was changed to allow nested alternatives and process them correctly.
- SR: B555011126
  The SSH agent installation method was not known to the opcrinst script, which should unpack the agent on the target node. Thus the opcrinst script simply did nothing.
- SR: B555011445
  The VPO agent no longer requires the aCC runtime library.

For SRs not listed in this section, please see the list of symptoms.

PHSS_23987:
- SR: 8606182250
  opcfwtmp did not handle the LOGIN_PROCESS value of the wtmprec.ut_type field of the WTMP structure, so bad logins from CDE were not detected.
- SR: 8606182981
  The ITO agent was integrated into the system startup process at runlevel 3, but the default runlevel from /etc/inittab was not checked. Now there is a check, and you will get a warning if the default runlevel is lower than 3.
- SR: B555010341
  When the process ID of 'opcctla -start' was the same as that of the opcctla running before the shutdown, the internal logic concluded that the agent was already running and did not start the subprocesses.
for all other defects not listed in this section please see the list of symptoms PHSS_23820: see the list of symptoms PHSS_22880: see the list of symptoms PHSS_22011: see the list of symptoms Enhancement: No SR: H555008602 H555008553 H555008529 H555008275 H555007666 H555006719 B555015758 B555015626 B555015331 B555014942 B555014851 B555014715 B555014596 B555014591 B555014574 B555014245 B555014215 B555014132 B555014093 B555013891 B555013794 B555013719 B555013620 B555013495 B555013435 B555013371 B555012929 B555012210 B555011990 B555011979 B555011638 B555011594 B555011505 B555011445 B555011184 B555011126 B555010980 B555010966 B555010955 B555010948 B555010899 B555010879 B555010620 B555010341 B555010079 B555009745 B555009155 B555009152 B555008838 B555008674 B555008613 B555008314 B555008220 B555007980 B555007752 B555007709 B555007602 B555007426 B555006890 B555006267 B553000162 8606262299 8606244523 8606242614 8606233602 8606232431 8606227840 8606222554 8606213476 8606189243 8606187183 8606182981 8606182250 8606181988 8606180891 8606180583 8606137088 Patch Files: OVOPC-CLT.OVOPC-UX10-CLT,fr=A.06.00,fa=HP-UX_B.11.00_32/64, v=HP: /var/opt/OV/share/databases/OpC/mgd_node/vendor/hp/s700/ hp-ux10/A.06.15/RPC_DCE_TCP/opc_pkg.Z /var/opt/OV/share/databases/OpC/mgd_node/vendor/hp/s700/ hp-ux10/A.06.15/RPC_DCE_TCP/install/opcrclchk /var/opt/OV/share/databases/OpC/mgd_node/vendor/hp/s700/ hp-ux10/A.06.15/RPC_DCE_TCP/install/opcrdschk /var/opt/OV/share/databases/OpC/mgd_node/vendor/hp/s700/ hp-ux10/A.06.15/RPC_DCE_TCP/install/opcrndchk /var/opt/OV/share/databases/OpC/mgd_node/vendor/hp/s700/ hp-ux10/A.06.15/RPC_DCE_TCP/install/opcroschk /var/opt/OV/share/databases/OpC/mgd_node/vendor/hp/s700/ hp-ux10/A.06.15/RPC_DCE_TCP/install/opcrverchk /var/opt/OV/share/databases/OpC/mgd_node/vendor/hp/s700/ hp-ux10/A.06.15/RPC_DCE_TCP/install/opcrinst /var/opt/OV/share/databases/OpC/mgd_node/vendor/hp/s700/ hp-ux10/A.06.15/RPC_DCE_TCP/monitor/ana_disk.sh.Z 
/var/opt/OV/share/databases/OpC/mgd_node/vendor/hp/s700/ hp-ux10/A.06.15/RPC_DCE_TCP/monitor/cpu_mon.sh.Z /var/opt/OV/share/databases/OpC/mgd_node/vendor/hp/s700/ hp-ux10/A.06.15/RPC_DCE_TCP/monitor/disk_mon.sh.Z /var/opt/OV/share/databases/OpC/mgd_node/vendor/hp/s700/ hp-ux10/A.06.15/RPC_DCE_TCP/monitor/last_logs.sh.Z /var/opt/OV/share/databases/OpC/mgd_node/vendor/hp/s700/ hp-ux10/A.06.15/RPC_DCE_TCP/monitor/mailq_l.sh.Z /var/opt/OV/share/databases/OpC/mgd_node/vendor/hp/s700/ hp-ux10/A.06.15/RPC_DCE_TCP/monitor/proc_mon.sh.Z /var/opt/OV/share/databases/OpC/mgd_node/vendor/hp/s700/ hp-ux10/A.06.15/RPC_DCE_TCP/monitor/swap_mon.sh.Z /var/opt/OV/share/databases/OpC/mgd_node/vendor/hp/s700/ hp-ux10/A.06.15/RPC_DCE_TCP/monitor/sh_procs.sh.Z /var/opt/OV/share/databases/OpC/mgd_node/vendor/hp/s700/ hp-ux10/A.06.15/RPC_DCE_TCP/monitor/vp_chk.sh.Z /var/opt/OV/share/databases/OpC/mgd_node/vendor/hp/s700/ hp-ux10/A.06.15/RPC_DCE_TCP/monitor/dist_mon.sh.Z /var/opt/OV/share/databases/OpC/mgd_node/vendor/hp/s700/ hp-ux10/A.06.15/RPC_DCE_TCP/monitor/mondbfile.sh.Z /var/opt/OV/share/databases/OpC/mgd_node/vendor/hp/s700/ hp-ux10/A.06.15/RPC_DCE_TCP/monitor/ssp_chk.sh.Z /var/opt/OV/share/databases/OpC/mgd_node/vendor/hp/s700/ hp-ux10/A.06.15/RPC_DCE_TCP/monitor/opcfwtmp.Z /var/opt/OV/share/databases/OpC/mgd_node/vendor/hp/s700/ hp-ux10/A.06.15/RPC_DCE_TCP/monitor/opcnprcs.Z /var/opt/OV/share/databases/OpC/mgd_node/vendor/hp/s700/ hp-ux10/A.06.15/RPC_DCE_TCP/monitor/ opc_get_ems_resource.Z /var/opt/OV/share/databases/OpC/mgd_node/vendor/hp/s700/ hp-ux10/A.06.15/RPC_DCE_TCP/actions/mailq_pr.sh.Z /var/opt/OV/share/databases/OpC/mgd_node/vendor/hp/s700/ hp-ux10/A.06.15/RPC_DCE_TCP/actions/st_inetd.sh.Z /var/opt/OV/share/databases/OpC/mgd_node/vendor/hp/s700/ hp-ux10/A.06.15/RPC_DCE_TCP/actions/st_syslogd.sh.Z /var/opt/OV/share/databases/OpC/mgd_node/vendor/hp/s700/ hp-ux10/A.06.15/RPC_DCE_TCP/actions/st_mail.sh.Z /var/opt/OV/share/databases/OpC/mgd_node/vendor/hp/s700/ 
hp-ux10/A.06.15/RPC_DCE_TCP/actions/dist_del.sh.Z
/var/opt/OV/share/databases/OpC/mgd_node/vendor/hp/s700/hp-ux10/A.06.15/RPC_DCE_TCP/cmds/opcdf.Z
/var/opt/OV/share/databases/OpC/mgd_node/vendor/hp/s700/hp-ux10/A.06.15/RPC_DCE_TCP/cmds/opclpst.Z
/var/opt/OV/share/databases/OpC/mgd_node/vendor/hp/s700/hp-ux10/A.06.15/RPC_DCE_TCP/cmds/opcps.Z
/var/opt/OV/share/databases/OpC/mgd_node/vendor/hp/s700/hp-ux10/A.06.15/RPC_DCE_TCP/cmds/E10000Log.sh.Z
/var/opt/OV/share/databases/OpC/mgd_node/vendor/hp/s700/hp-ux10/A.06.15/RPC_DCE_TCP/cmds/ssp_config.sh.Z
/var/opt/OV/share/databases/OpC/mgd_node/vendor/hp/s700/hp-ux10/A.06.15/RPC_DCE_TCP/cmds/opc_sec_v.sh.Z
/var/opt/OV/share/databases/OpC/mgd_node/vendor/hp/s700/hp-ux10/AgentPlatform
/var/opt/OV/share/databases/OpC/mgd_node/vendor/hp/s800/hp-ux10/AgentPlatform

what(1) Output:

OVOPC-CLT.OVOPC-UX10-CLT,fr=A.06.00,fa=HP-UX_B.11.00_32/64,v=HP:
/var/opt/OV/share/databases/OpC/mgd_node/vendor/hp/s700/hp-ux10/A.06.15/RPC_DCE_TCP/opc_pkg.Z: None
/var/opt/OV/share/databases/OpC/mgd_node/vendor/hp/s700/hp-ux10/A.06.15/RPC_DCE_TCP/install/opcrclchk: HP OpenView VantagePoint A.06.15 (07/17/03)
/var/opt/OV/share/databases/OpC/mgd_node/vendor/hp/s700/hp-ux10/A.06.15/RPC_DCE_TCP/install/opcrdschk: HP OpenView VantagePoint A.06.15 (07/17/03)
/var/opt/OV/share/databases/OpC/mgd_node/vendor/hp/s700/hp-ux10/A.06.15/RPC_DCE_TCP/install/opcrndchk: HP OpenView VantagePoint A.06.15 (07/17/03)
/var/opt/OV/share/databases/OpC/mgd_node/vendor/hp/s700/hp-ux10/A.06.15/RPC_DCE_TCP/install/opcroschk: HP OpenView VantagePoint A.06.15 (07/17/03)
/var/opt/OV/share/databases/OpC/mgd_node/vendor/hp/s700/hp-ux10/A.06.15/RPC_DCE_TCP/install/opcrverchk: HP OpenView VantagePoint A.06.15 (07/17/03)
/var/opt/OV/share/databases/OpC/mgd_node/vendor/hp/s700/hp-ux10/A.06.15/RPC_DCE_TCP/install/opcrinst: HP OpenView VantagePoint A.06.15 (07/17/03)
/var/opt/OV/share/databases/OpC/mgd_node/vendor/hp/s700/hp-ux10/A.06.15/RPC_DCE_TCP/monitor/ana_disk.sh.Z: None
/var/opt/OV/share/databases/OpC/mgd_node/vendor/hp/s700/hp-ux10/A.06.15/RPC_DCE_TCP/monitor/cpu_mon.sh.Z: None
/var/opt/OV/share/databases/OpC/mgd_node/vendor/hp/s700/hp-ux10/A.06.15/RPC_DCE_TCP/monitor/disk_mon.sh.Z: None
/var/opt/OV/share/databases/OpC/mgd_node/vendor/hp/s700/hp-ux10/A.06.15/RPC_DCE_TCP/monitor/last_logs.sh.Z: None
/var/opt/OV/share/databases/OpC/mgd_node/vendor/hp/s700/hp-ux10/A.06.15/RPC_DCE_TCP/monitor/mailq_l.sh.Z: None
/var/opt/OV/share/databases/OpC/mgd_node/vendor/hp/s700/hp-ux10/A.06.15/RPC_DCE_TCP/monitor/proc_mon.sh.Z: None
/var/opt/OV/share/databases/OpC/mgd_node/vendor/hp/s700/hp-ux10/A.06.15/RPC_DCE_TCP/monitor/swap_mon.sh.Z: None
/var/opt/OV/share/databases/OpC/mgd_node/vendor/hp/s700/hp-ux10/A.06.15/RPC_DCE_TCP/monitor/sh_procs.sh.Z: None
/var/opt/OV/share/databases/OpC/mgd_node/vendor/hp/s700/hp-ux10/A.06.15/RPC_DCE_TCP/monitor/vp_chk.sh.Z: None
/var/opt/OV/share/databases/OpC/mgd_node/vendor/hp/s700/hp-ux10/A.06.15/RPC_DCE_TCP/monitor/dist_mon.sh.Z: None
/var/opt/OV/share/databases/OpC/mgd_node/vendor/hp/s700/hp-ux10/A.06.15/RPC_DCE_TCP/monitor/mondbfile.sh.Z: None
/var/opt/OV/share/databases/OpC/mgd_node/vendor/hp/s700/hp-ux10/A.06.15/RPC_DCE_TCP/monitor/ssp_chk.sh.Z: None
/var/opt/OV/share/databases/OpC/mgd_node/vendor/hp/s700/hp-ux10/A.06.15/RPC_DCE_TCP/monitor/opcfwtmp.Z: None
/var/opt/OV/share/databases/OpC/mgd_node/vendor/hp/s700/hp-ux10/A.06.15/RPC_DCE_TCP/monitor/opcnprcs.Z: None
/var/opt/OV/share/databases/OpC/mgd_node/vendor/hp/s700/hp-ux10/A.06.15/RPC_DCE_TCP/monitor/opc_get_ems_resource.Z: None
/var/opt/OV/share/databases/OpC/mgd_node/vendor/hp/s700/hp-ux10/A.06.15/RPC_DCE_TCP/actions/mailq_pr.sh.Z: None
/var/opt/OV/share/databases/OpC/mgd_node/vendor/hp/s700/hp-ux10/A.06.15/RPC_DCE_TCP/actions/st_inetd.sh.Z: None
/var/opt/OV/share/databases/OpC/mgd_node/vendor/hp/s700/hp-ux10/A.06.15/RPC_DCE_TCP/actions/st_syslogd.sh.Z: None
/var/opt/OV/share/databases/OpC/mgd_node/vendor/hp/s700/hp-ux10/A.06.15/RPC_DCE_TCP/actions/st_mail.sh.Z: None
/var/opt/OV/share/databases/OpC/mgd_node/vendor/hp/s700/hp-ux10/A.06.15/RPC_DCE_TCP/actions/dist_del.sh.Z: None
/var/opt/OV/share/databases/OpC/mgd_node/vendor/hp/s700/hp-ux10/A.06.15/RPC_DCE_TCP/cmds/opcdf.Z: None
/var/opt/OV/share/databases/OpC/mgd_node/vendor/hp/s700/hp-ux10/A.06.15/RPC_DCE_TCP/cmds/opclpst.Z: None
/var/opt/OV/share/databases/OpC/mgd_node/vendor/hp/s700/hp-ux10/A.06.15/RPC_DCE_TCP/cmds/opcps.Z: None
/var/opt/OV/share/databases/OpC/mgd_node/vendor/hp/s700/hp-ux10/A.06.15/RPC_DCE_TCP/cmds/E10000Log.sh.Z: None
/var/opt/OV/share/databases/OpC/mgd_node/vendor/hp/s700/hp-ux10/A.06.15/RPC_DCE_TCP/cmds/ssp_config.sh.Z: None
/var/opt/OV/share/databases/OpC/mgd_node/vendor/hp/s700/hp-ux10/A.06.15/RPC_DCE_TCP/cmds/opc_sec_v.sh.Z: None
/var/opt/OV/share/databases/OpC/mgd_node/vendor/hp/s700/hp-ux10/AgentPlatform: None
/var/opt/OV/share/databases/OpC/mgd_node/vendor/hp/s800/hp-ux10/AgentPlatform: None

cksum(1) Output:

OVOPC-CLT.OVOPC-UX10-CLT,fr=A.06.00,fa=HP-UX_B.11.00_32/64,v=HP:
4133246717 7592813 /var/opt/OV/share/databases/OpC/mgd_node/vendor/hp/s700/hp-ux10/A.06.15/RPC_DCE_TCP/opc_pkg.Z
817617292 6898 /var/opt/OV/share/databases/OpC/mgd_node/vendor/hp/s700/hp-ux10/A.06.15/RPC_DCE_TCP/install/opcrclchk
2252717812 28923 /var/opt/OV/share/databases/OpC/mgd_node/vendor/hp/s700/hp-ux10/A.06.15/RPC_DCE_TCP/install/opcrdschk
1013147263 6734 /var/opt/OV/share/databases/OpC/mgd_node/vendor/hp/s700/hp-ux10/A.06.15/RPC_DCE_TCP/install/opcrndchk
4238628260 6287 /var/opt/OV/share/databases/OpC/mgd_node/vendor/hp/s700/hp-ux10/A.06.15/RPC_DCE_TCP/install/opcroschk
2496525218 32431 /var/opt/OV/share/databases/OpC/mgd_node/vendor/hp/s700/hp-ux10/A.06.15/RPC_DCE_TCP/install/opcrverchk
2147588013 105959 /var/opt/OV/share/databases/OpC/mgd_node/vendor/hp/s700/hp-ux10/A.06.15/RPC_DCE_TCP/install/opcrinst
3903392991 2731 /var/opt/OV/share/databases/OpC/mgd_node/vendor/hp/s700/hp-ux10/A.06.15/RPC_DCE_TCP/monitor/ana_disk.sh.Z
198219211 5979 /var/opt/OV/share/databases/OpC/mgd_node/vendor/hp/s700/hp-ux10/A.06.15/RPC_DCE_TCP/monitor/cpu_mon.sh.Z
2390618391 6133 /var/opt/OV/share/databases/OpC/mgd_node/vendor/hp/s700/hp-ux10/A.06.15/RPC_DCE_TCP/monitor/disk_mon.sh.Z
2949826462 5849 /var/opt/OV/share/databases/OpC/mgd_node/vendor/hp/s700/hp-ux10/A.06.15/RPC_DCE_TCP/monitor/last_logs.sh.Z
688415157 5844 /var/opt/OV/share/databases/OpC/mgd_node/vendor/hp/s700/hp-ux10/A.06.15/RPC_DCE_TCP/monitor/mailq_l.sh.Z
2139831115 6008 /var/opt/OV/share/databases/OpC/mgd_node/vendor/hp/s700/hp-ux10/A.06.15/RPC_DCE_TCP/monitor/proc_mon.sh.Z
621198497 5879 /var/opt/OV/share/databases/OpC/mgd_node/vendor/hp/s700/hp-ux10/A.06.15/RPC_DCE_TCP/monitor/swap_mon.sh.Z
1234321595 5437 /var/opt/OV/share/databases/OpC/mgd_node/vendor/hp/s700/hp-ux10/A.06.15/RPC_DCE_TCP/monitor/sh_procs.sh.Z
1508771624 5736 /var/opt/OV/share/databases/OpC/mgd_node/vendor/hp/s700/hp-ux10/A.06.15/RPC_DCE_TCP/monitor/vp_chk.sh.Z
92338108 6126 /var/opt/OV/share/databases/OpC/mgd_node/vendor/hp/s700/hp-ux10/A.06.15/RPC_DCE_TCP/monitor/dist_mon.sh.Z
498790562 14374 /var/opt/OV/share/databases/OpC/mgd_node/vendor/hp/s700/hp-ux10/A.06.15/RPC_DCE_TCP/monitor/mondbfile.sh.Z
3473433661 5980 /var/opt/OV/share/databases/OpC/mgd_node/vendor/hp/s700/hp-ux10/A.06.15/RPC_DCE_TCP/monitor/ssp_chk.sh.Z
2454815910 12529 /var/opt/OV/share/databases/OpC/mgd_node/vendor/hp/s700/hp-ux10/A.06.15/RPC_DCE_TCP/monitor/opcfwtmp.Z
3706607476 9344 /var/opt/OV/share/databases/OpC/mgd_node/vendor/hp/s700/hp-ux10/A.06.15/RPC_DCE_TCP/monitor/opcnprcs.Z
3715416158 17774 /var/opt/OV/share/databases/OpC/mgd_node/vendor/hp/s700/hp-ux10/A.06.15/RPC_DCE_TCP/monitor/opc_get_ems_resource.Z
2509433982 2537 /var/opt/OV/share/databases/OpC/mgd_node/vendor/hp/s700/hp-ux10/A.06.15/RPC_DCE_TCP/actions/mailq_pr.sh.Z
2940889584 2581 /var/opt/OV/share/databases/OpC/mgd_node/
vendor/hp/s700/hp-ux10/A.06.15/RPC_DCE_TCP/actions/st_inetd.sh.Z
4279336261 2590 /var/opt/OV/share/databases/OpC/mgd_node/vendor/hp/s700/hp-ux10/A.06.15/RPC_DCE_TCP/actions/st_syslogd.sh.Z
1944633495 2582 /var/opt/OV/share/databases/OpC/mgd_node/vendor/hp/s700/hp-ux10/A.06.15/RPC_DCE_TCP/actions/st_mail.sh.Z
2335390130 6112 /var/opt/OV/share/databases/OpC/mgd_node/vendor/hp/s700/hp-ux10/A.06.15/RPC_DCE_TCP/actions/dist_del.sh.Z
1402239622 325 /var/opt/OV/share/databases/OpC/mgd_node/vendor/hp/s700/hp-ux10/A.06.15/RPC_DCE_TCP/cmds/opcdf.Z
2637984484 386 /var/opt/OV/share/databases/OpC/mgd_node/vendor/hp/s700/hp-ux10/A.06.15/RPC_DCE_TCP/cmds/opclpst.Z
3458075581 401 /var/opt/OV/share/databases/OpC/mgd_node/vendor/hp/s700/hp-ux10/A.06.15/RPC_DCE_TCP/cmds/opcps.Z
1802935583 3322 /var/opt/OV/share/databases/OpC/mgd_node/vendor/hp/s700/hp-ux10/A.06.15/RPC_DCE_TCP/cmds/E10000Log.sh.Z
3235221589 3109 /var/opt/OV/share/databases/OpC/mgd_node/vendor/hp/s700/hp-ux10/A.06.15/RPC_DCE_TCP/cmds/ssp_config.sh.Z
576971563 13159 /var/opt/OV/share/databases/OpC/mgd_node/vendor/hp/s700/hp-ux10/A.06.15/RPC_DCE_TCP/cmds/opc_sec_v.sh.Z
3752902495 6319 /var/opt/OV/share/databases/OpC/mgd_node/vendor/hp/s700/hp-ux10/AgentPlatform
1624156908 6317 /var/opt/OV/share/databases/OpC/mgd_node/vendor/hp/s800/hp-ux10/AgentPlatform

Patch Conflicts: None

Patch Dependencies: None

Hardware Dependencies: None

Other Dependencies: None

Supersedes: PHSS_26517 PHSS_25537 PHSS_24609 PHSS_23987 PHSS_23820 PHSS_22880 PHSS_22011

Equivalent Patches: None

Patch Package Size: 7390 KBytes

Installation Instructions:

Please review all instructions and the Hewlett-Packard SupportLine User Guide or your Hewlett-Packard support terms and conditions for precautions, scope of license, restrictions, and limitation of liability and warranties, before installing this patch.
------------------------------------------------------------
1. Back up your system before installing a patch.

2. Log in as root.

3.
Copy the patch to the /tmp directory.

4. Move to the /tmp directory and unshar the patch:

   cd /tmp
   sh PHSS_28781

5. Run swinstall to install the patch:

   swinstall -x autoreboot=true -x patch_match_target=true \
       -s /tmp/PHSS_28781.depot

By default, swinstall archives the original software in /var/adm/sw/save/PHSS_28781. If you do not wish to retain a copy of the original software, include the patch_save_files option in the swinstall command above:

   -x patch_save_files=false

WARNING: If patch_save_files is false when a patch is installed, the patch cannot be deinstalled. Please be careful when using this feature.

For future reference, the contents of the PHSS_28781.text file are available in the product readme:

   swlist -l product -a readme -d @ /tmp/PHSS_28781.depot

To put this patch on a magnetic tape and install from the tape drive, use the command:

   dd if=/tmp/PHSS_28781.depot of=/dev/rmt/0m bs=2k

Special Installation Instructions:

(A) Patch Installation Instructions
    -------------------------------

(A1) Install the patch, following the standard installation instructions provided above under "Installation Instructions". Note that you can use opc_backup(5) to back up your system before installing the patch.

NOTE: Make sure that no agent of the platform addressed by this patch is being distributed (either from the ITO Administrator's GUI or from the command line using inst.sh) while swinstall is running.

If you are running VPO in an MC/ServiceGuard installation:

- Note that only files on the shared disk volume at /var/opt/OV/share will be patched. Therefore, install the patch on one cluster node while the shared disks are mounted. The server processes may be running during patch installation.

- It is not necessary to install this patch on all cluster nodes. Although the software inventory on the other cluster nodes will not be updated, the patched files will be available on those nodes when the shared disk is switched to them.
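After the installation completes, the SD-UX inventory can be queried to confirm that the patch was recorded. The following is a minimal sketch, not part of the official instructions; it assumes a standard SD-UX setup, and the exact selection behavior of swlist may vary with your configuration:

```shell
# Sketch: check whether PHSS_28781 appears in the installed-software
# inventory. swlist exits non-zero when the selection matches nothing,
# so its exit status can be tested directly in sh.
status="not installed"
if swlist PHSS_28781 >/dev/null 2>&1 ; then
    status="installed"
fi
echo "PHSS_28781 is $status"
```

The same check can be run after a deinstallation to confirm that the patch entry was removed from the inventory.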
NOTE: This patch must be installed on the VPO Management Server system, NOT directly on a VPO Managed Node. Changes take effect on managed nodes by means of VPO Software Distribution (use 'Force Update' if an agent is already installed on the managed node). See chapter 2 of the VPO Administrator's Reference manual for more information.

Be aware that if you redistribute agent software to MC/ServiceGuard cluster nodes, you have to update the agent registration on these nodes as follows (this is also documented in the Installation Guide):

   # INSTDIR=/opt/OV/bin/OpC/install
   # cp /opt/OV/newconfig/OpC/$INSTDIR/itoagt_sg.reg \
        $INSTDIR/itoagt.reg
   # opcagtreg -add $INSTDIR/itoagt.reg

(B) Patch Deinstallation Instructions
    ---------------------------------

NOTE: Make sure that no agent of the platform addressed by this patch is being distributed (either from the ITO Administrator's GUI or from the command line using inst.sh) while swremove is running.

If you are running VPO in an MC/ServiceGuard installation, make sure to mount the shared disks at the node, and only at the node, that had them mounted during patch installation. Otherwise, restoration of the original files onto the shared disk will fail.
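Whether a patched file was delivered correctly, or restored correctly after deinstallation, can be spot-checked against the cksum(1) listing above. The following is a minimal sketch; cksum(1) and awk(1) are standard POSIX tools available on HP-UX, but the check_file helper and its output format are illustrative only, not part of the patch:

```shell
# Sketch: compare a file on disk against one "<checksum> <size> <path>"
# entry taken from the cksum(1) listing above.
check_file() {
    # $1 = expected checksum, $2 = expected size in bytes, $3 = path
    actual=`cksum "$3" 2>/dev/null | awk '{print $1, $2}'`
    if [ "$actual" = "$1 $2" ] ; then
        echo "OK: $3"
    else
        echo "MISMATCH: $3 (expected '$1 $2', got '$actual')"
        return 1
    fi
}

# Example, using the opcdf.Z entry from the listing above:
# check_file 1402239622 325 \
#     /var/opt/OV/share/databases/OpC/mgd_node/vendor/hp/s700/hp-ux10/A.06.15/RPC_DCE_TCP/cmds/opcdf.Z
```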