Patch Name: PHKL_25054

Patch Description: s700_800 11.04 (VVOS) VxFS 3.1 cumulative patch

Creation Date: 01/10/07

Post Date: 01/10/26

Hardware Platforms - OS Releases:
    s700: 11.04
    s800: 11.04

Products: N/A

Filesets:
    JournalFS.VXFS-PRG,fr=B.11.04,fa=HP-UX_B.11.04_32/64,v=HP
    JournalFS.VXFS-BASE-KRN,fr=B.11.04,fa=HP-UX_B.11.04_32,v=HP
    JournalFS.VXFS-BASE-KRN,fr=B.11.04,fa=HP-UX_B.11.04_64,v=HP

Automatic Reboot?: Yes

Status: General Release

Critical: Yes
    PHKL_25054: PANIC CORRUPTION
    Based on HP-UX Patch PHKL_24027: PANIC CORRUPTION
    Based on HP-UX Patch PHKL_24012: PANIC
    PHKL_22561: PANIC CORRUPTION HANG
    Based on HP-UX Patch PHKL_22432: PANIC CORRUPTION
    Based on HP-UX Patch PHKL_21392: PANIC CORRUPTION
    PHKL_20911: PANIC HANG
    Based on HP-UX Patch PHKL_20674: PANIC
    Based on HP-UX Patch PHKL_19800: HANG
    PHKL_19441: HANG
    Based on HP-UX Patch PHKL_19169: HANG

Category Tags: defect_repair enhancement general_release critical
    panic halts_system corruption

Path Name: /hp-ux_patches/s700_800/11.X/PHKL_25054

Symptoms:
    PHKL_25054: Ported HP-UX patch PHKL_24027 to VVOS

    Based on HP-UX patch PHKL_24027:

    ( SR: 8606113817 CR: JAGac12337 )
    ftruncate(2)/truncate(2) on memory mapped VxFS files may
    invalidate the partial page containing eof. That page may contain
    valid data, so this can cause data corruption.

    ( SR: 8606144927 CR: JAGad14267 )
    When quota is enabled on a VxFS filesystem, chown(1m) may fail
    with EINVAL for uids between 2560 and 9983 and for uids 15104 or
    higher.

    ( SR: 8606177859 CR: JAGad47086 )
    The repquota(1m) command may show incorrect quota usage for some
    users on a quota-enabled VxFS filesystem.

    ( SR: 8606183708 CR: JAGad52921 )
    Data Page Fault while using the Hyperfabric network. The stack of
    the panic thread may look like:

        panic+0x14
        report_trap_or_int_and_panic+0x84
        interrupt+0x1d4
        $ihndlr_rtn+0x0
        sendfile_rele+0x304
        freeb_pullupmsg+0x238
        freeb+0x7b4
        CLIC_SEND+0x1ecc
        clicdlpi_wput+0x140
        putnext+0xcc
        ip_wput_ire+0x454
        ip_wput+0x470
        putnext+0xcc
        tcp_timer+0x334
        tcp_wput+0x828
        puthere+0x148
        mi_timeout_exec+0x294
        sw_service+0xb0
        mp_ext_interrupt+0x150
        ivti_patch_to_nop3+0x0
        idle+0x81c

    Based on HP-UX patch PHKL_24012:

    ( SR: 8606180062 DTS: JAGad49284 )
    Data Page Fault in vx_rwsleep_unlock()/vx_igunlock() invoked from
    vx_iflush() or vx_fsflushi(). This happens under heavy stress to
    VxFS filesystems. The stack of the panic thread may look like:

        panic+0x10
        report_trap_or_int_and_panic+0xe8
        trap+0xa48
        thandler+0xb7c
        vx_rwsleep_unlock+0xc
        vx_igunlock+0x14
        vx_iflush+0x1fc
        vx_iflush_thread+0x70
        vx_startdaemon+0xac
        vx_postinit+0x74
        vx_sync+0x14
        update+0x6c
        sync+0x20

    PHKL_22561: Ported HP-UX patch PHKL_22432 to VVOS

    Based on HP-UX patch PHKL_22432:

    ( SR: 8606156750 DTS: JAGad26084 )
    Data page fault panic in vx_rwsleep_trylock(). An example stack
    trace may include the following:

        vx_rwsleep_trylock
        vx_irwlock_try
        vx_iflush

    Since the panic could occur in any VxFS ('vx_') flushing routine,
    other stack traces are possible as well. This failure occurs when
    many threads are reading from and writing to the VxFS filesystem.

    ( SR: 8606140628 DTS: JAGad09987 )
    Processes accessing files on VxFS file systems may hang, and the
    hang can only be cleared by rebooting the system.

    Based on HP-UX patch PHKL_21392:

    ( SR: 8606105425 DTS: JAGab73477 )
    Data page fault panic. Due to the nature of the fault, the panic
    could occur in any VxFS ('vx_') routine, so there is no specific
    stack trace for reference. This failure occurs during heavy VxFS
    filesystem use. This problem was introduced by PHKL_17205.
    PHKL_21225: Ported HP-UX patch PHKL_20079 to VVOS

    Based on HP-UX patch PHKL_20079:

    ( SR: 8606104878 DTS: JAGab72639 )
    VxFS performance may be poor on large systems, due to lock
    contention between processors.

    Based on HP-UX patch PHKL_19942:

    ( SR: 8606104878 DTS: JAGab72639 )
    VxFS performance may be poor on large systems if directories
    containing many files are accessed concurrently by several
    processes.

    PHKL_19441: Ported HP-UX patch PHKL_19169 to VVOS

    Based on HP-UX patch PHKL_19169:

    ( SR: 8606101101 DTS: JAGab21116 )
    Processes accessing files on VxFS file systems may hang, and the
    hang can only be cleared by rebooting the system. This problem was
    introduced with patch PHKL_18534.

    PHKL_20911: Ported HP-UX patch PHKL_20674 to VVOS

    Based on HP-UX patch PHKL_20674:

    ( SR: 8606113482 DTS: JAGac00113 )
    The sync command may panic in vx_rwsleep_trylock()/vx_fsflushi().
    The stack of the panic thread is given below:

        panic+0x14
        report_trap_or_int_and_panic+0x80
        trap+0xa8c
        nokgdb+0x8
        vx_rwsleep_trylock+0x14
        vx_irwlock_try+0x18
        vx_fsflushi+0x37c
        vx_sync+0x100
        update+0x48
        sync+0x24
        syscall+0x480
        $syscallrtn+0x0

    Based on HP-UX patch PHKL_19800:

    1.  PHKL_18531 may hang a uniprocessor system while unmounting a
        VxFS filesystem if an inode is locked by some other process.

    2.  VxFS may hang if more than one process writes to a memory
        mapped file.

    PHKL_19989:

    A large number of sequential writes to a regular file takes longer
    on VxFS when compared with HFS.

    Based on HP-UX patch PHKL_20401:

    ( SR: 4701415679 DTS: JAGaa93188 )
    quotaon does not work. It fails with an I/O error:

        # quotaon -v /home
        quotactl: /dev/vg02/lvol1: I/O error

Defect Description:
    PHKL_25054: Ported HP-UX patch PHKL_24027 to VVOS

    Based on HP-UX patch PHKL_24027:

    ( SR: 8606113817 CR: JAGac12337 )
    While truncating a memory mapped VxFS file, VxFS invalidates the
    partial page containing eof, causing data corruption. (A minimal
    user-space sketch of the triggering access pattern is shown after
    the PHKL_24012 entry below.)

    Resolution: Do not invalidate the partial page containing eof
    while truncating a memory mapped file.

    ( SR: 8606144927 CR: JAGad14267 )
    For uids between 2560 and 9983 and for uids 15104 or higher,
    chown(1m) needs to extend the quota file, which fails because an
    incorrect block size is used.

    Resolution: Make sure the correct block size is used while
    extending the quota file.

    ( SR: 8606177859 CR: JAGad47086 )
    Under some extreme corner cases a file's link to the user quota
    structure is removed, and thereafter allocation/deallocation of
    blocks to the file is not accounted in the quota. This may result
    in incorrect quota calculation for users, and in some cases users
    may be charged for heavy usage of the filesystem.

    Resolution: Make sure that all files always have proper links to
    the user quota structure.

    ( SR: 8606183708 CR: JAGad52921 )
    VxFS may free a vnode while a buffer associated with that vnode is
    in use in sendfile(2). Later, when the sendfile code accesses the
    vnode through the buffer, the system panics.

    Resolution: Set up a dummy vnode which is never freed and use that
    vnode for buffers passed to sendfile(2), so that the sendfile(2)
    code always accesses a valid vnode.

    Based on HP-UX patch PHKL_24012:

    ( SR: 8606180062 DTS: JAGad49284 )
    The system panics while unlocking a lock on an inode. The lock was
    freed by a thread reusing the inode while another thread had
    locked it for flushing. Later, when the flushing thread tries to
    unlock the freed lock, the system panics.

    Resolution: Keep a hold on the vnode while flushing the inode so
    that the inode is not reused while it is being flushed.
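    The following is a minimal user-space sketch of the access pattern
    behind SR 8606113817. It is illustrative only: it is not VxFS
    source code, and the file path is just an example on a mounted
    VxFS filesystem. The program writes to a memory mapped file and
    then truncates it so that the new end of file falls inside a
    partially written page; the bytes in that partial page that lie
    before the new end of file are valid data and must survive the
    truncate.

        #include <fcntl.h>
        #include <stdio.h>
        #include <string.h>
        #include <sys/mman.h>
        #include <sys/types.h>
        #include <unistd.h>

        int
        main(void)
        {
            /* Example path on a mounted VxFS filesystem. */
            const char *path = "/vxfs_mnt/trunc_demo";
            long pagesz = sysconf(_SC_PAGESIZE);
            size_t len = (size_t)pagesz * 2;        /* two full pages */
            off_t neweof = (off_t)pagesz + pagesz / 2;
            char *map;
            int fd;

            if ((fd = open(path, O_CREAT | O_RDWR, 0644)) < 0) {
                perror("open");
                return 1;
            }
            if (ftruncate(fd, (off_t)len) < 0) {    /* size the file */
                perror("ftruncate");
                return 1;
            }
            map = mmap(NULL, len, PROT_READ | PROT_WRITE,
                       MAP_SHARED, fd, 0);
            if (map == MAP_FAILED) {
                perror("mmap");
                return 1;
            }

            memset(map, 'A', len);                  /* dirty both pages */
            msync(map, len, MS_SYNC);

            /*
             * Shrink the file so that eof now falls in the middle of
             * the second page.  Bytes at offsets below neweof are
             * still part of the file and should keep the value 'A'.
             */
            if (ftruncate(fd, neweof) < 0) {
                perror("ftruncate");
                return 1;
            }
            printf("byte just before eof: %c\n", map[neweof - 1]);

            munmap(map, len);
            close(fd);
            return 0;
        }

    On a correctly behaving filesystem the byte printed is still 'A';
    with the defect described above, the partial page containing eof
    could be invalidated and that data lost.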
    PHKL_22561: Ported HP-UX patch PHKL_22432 to VVOS

    Based on HP-UX patch PHKL_22432:

    ( SR: 8606156750 DTS: JAGad26084 )
    VxFS inodes were not cleaned up properly when files were deleted.
    This could leave inodes in an inconsistent state in the inode
    cache, and the system could later panic while processing them. In
    addition, a flag on the inode indicating that the inode is being
    reused was reset without holding the proper lock, leading to races
    with other functions processing the inode.

    Resolution: Clean up inodes when files are deleted, and reset the
    above flag on the inode only while holding the required lock.

    ( SR: 8606140628 DTS: JAGad09987 )
    The problem is a deadlock between two VxFS threads accessing the
    same inode. One thread has incremented the soft hold count on the
    inode and waits for the inode lock, while the other thread holds
    the inode lock and waits for the soft hold count of the inode to
    go to zero.

    Resolution: The thread that is holding the soft hold count gives
    up the soft hold if it detects that the inode lock is held by some
    other thread.

    Based on HP-UX patch PHKL_21392:

    ( SR: 8606105425 DTS: JAGab73477 )
    vx_real_iget() was holding the vnode without having the inode lock
    in the "found" path, causing race conditions with several other
    VxFS routines, which in turn resulted in data page fault panics.

    Resolution: Get the inode lock before holding the vnode in
    vx_real_iget().

    PHKL_21225: Ported HP-UX patch PHKL_20079 to VVOS

    Based on HP-UX patch PHKL_20079:

    ( SR: 8606104878 DTS: JAGab72639 )
    The number of spin locks was insufficient for systems with several
    processors. VxFS performance is very poor with the current 32 spin
    locks for inode operations.

    Resolution: Increase the number of spin locks to 256.

    Based on HP-UX patch PHKL_19942:

    ( SR: 8606104878 DTS: JAGab72639 )
    vx_iget always locks the ibmap exclusively and sets the ibmap.
    This is not needed if the inode already has the correct ibmap, and
    the unnecessary locking affects performance.

    Resolution: Lock the ibmap only if it is necessary to set the
    ibmapops.

    PHKL_19441: Ported HP-UX patch PHKL_19169 to VVOS

    Based on HP-UX patch PHKL_19169:

    ( SR: 8606101101 DTS: JAGab21116 )
    The problem is a deadlock between two VxFS processes accessing the
    same inode. One process has incremented the soft hold count on the
    inode and waits for the inode lock, while the other process holds
    the inode lock and waits for the soft hold count of the inode to
    go to zero.

    Resolution: The process that is holding the inode lock gives up
    the lock if it detects a non-zero soft hold count on the inode
    after a limited number of retries.

    PHKL_20911: Ported HP-UX patch PHKL_20674 to VVOS

    Based on HP-UX patch PHKL_20674:

    ( SR: 8606113482 DTS: JAGac00113 )
    vx_ireuse_clean() was assuming that vx_inodes on the free lists
    have i_max_lwrid = 0. When an inode is stolen from the free lists,
    fields such as i_fs and i_slocks are reset, but i_max_lwrid is
    not. This may lead to a Data Page Fault in vx_fsflushi(). This
    problem was introduced by PHKL_17205.

    Resolution: vx_inactive_tran() makes sure that i_max_lwrid = 0
    before putting the vx_inode on the free lists.

    Based on HP-UX patch PHKL_19800:

    1.  While unmounting a VxFS filesystem, the unmount locks all the
        active inodes in the file system in an infinite loop. This may
        hang a uniprocessor system if an inode is locked by somebody
        else. This was introduced by patch PHKL_18531.

        Resolution: Release the processor before retrying to lock the
        inode while unmounting the filesystem.
    2.  If more than one process writes to a memory mapped file, a
        deadlock may occur between inode locks and the buffer cache
        because of incorrect ordering of the locks. To reproduce the
        problem, map a file to memory and keep writing to its pages,
        then start two other processes which write to the same file;
        VxFS hangs.

        Resolution: Changed the ordering of the locks so that the
        deadlock condition is avoided.

    PHKL_19989:

    Redundant code in routine vx_write1() results in the system
    zeroing out privilege vectors and calling vx_iupdat() with every
    write() system call.

    Resolution: Clear privilege vectors and update the inode only if
    privileges exist on the file. Once the privileges are cleared,
    subsequent writes to the same file will not zero out the privilege
    vectors and will not call vx_iupdat().

    Based on HP-UX patch PHKL_20401:

    ( SR: 4701415679 DTS: JAGaa93188 )
    Negative user IDs were not handled while processing quotas. For
    example, the UID of nobody is -2. This was failing a sanity check
    in the quotaon code path.

    Resolution: Cast UIDs to unsigned in the sanity checks in various
    paths to handle negative UIDs.

SR:
    4701415679 8606101101 8606103794 8606104878 8606105015
    8606105425 8606113482 8606113817 8606144927 8606177859
    8606183708 8606140628 8606156750 8606180062 8606105681

Patch Files:
    JournalFS.VXFS-PRG,fr=B.11.04,fa=HP-UX_B.11.04_32/64,v=HP:
    /usr/include/sys/fs/vx_bsdquota.h
    /usr/include/sys/fs/vx_port.h

    JournalFS.VXFS-BASE-KRN,fr=B.11.04,fa=HP-UX_B.11.04_32,v=HP:
    /usr/conf/lib/libvxfs_base.a(vx_bsdquota.o)
    /usr/conf/lib/libvxfs_base.a(vx_iflush.o)
    /usr/conf/lib/libvxfs_base.a(vx_inode.o)
    /usr/conf/lib/libvxfs_base.a(vx_rdwri.o)
    /usr/conf/lib/libvxfs_base.a(vx_vnops.o)

    JournalFS.VXFS-BASE-KRN,fr=B.11.04,fa=HP-UX_B.11.04_64,v=HP:
    /usr/conf/lib/libvxfs_base.a(vx_bsdquota.o)
    /usr/conf/lib/libvxfs_base.a(vx_iflush.o)
    /usr/conf/lib/libvxfs_base.a(vx_inode.o)
    /usr/conf/lib/libvxfs_base.a(vx_rdwri.o)
    /usr/conf/lib/libvxfs_base.a(vx_vnops.o)

what(1) Output:
    JournalFS.VXFS-PRG,fr=B.11.04,fa=HP-UX_B.11.04_32/64,v=HP:
    /usr/include/sys/fs/vx_bsdquota.h:
        vx_bsdquota.h $Date: 1999/11/04 07:20:57 $Revision: r11ros/1
        PATCH_11.00 * (PHKL_20401)
        vx_bsdquota.h: $Revision: 1.5.105.3 $ $Date: 97/03/06
        14:22:50 $ src/kernel/vxfs/vx_bsdquota.h 2.6 12 Mar 1996
        03:28:15 - */ fshp:src/kernel/vxfs/vx_bsdquota.h 2.6
    /usr/include/sys/fs/vx_port.h:
        vx_port.h $Date: 1999/11/04 07:18:49 $Revision: r11ros/2
        PATCH_11.00 (PHKL_20401)
        vx_port.h: $Revision: 1.5.106.3 $ $Date: 97/08/25 17:03:17 $
        src/kernel/vxfs/vx_port.h 2.28.7.6 17 Jul 1997 17:42:52 - */
        fshp:src/kernel/vxfs/vx_port.h 2.28.7.6

    JournalFS.VXFS-BASE-KRN,fr=B.11.04,fa=HP-UX_B.11.04_32,v=HP:
    /usr/conf/lib/libvxfs_base.a(vx_bsdquota.o):
        vx_bsdquota.c $Date: 2001/05/07 13:50:24 $Revision: r11ros/3
        PATCH_11.00 (PHKL_24027)
    /usr/conf/lib/libvxfs_base.a(vx_iflush.o):
        vx_iflush.c $Date: 2001/04/25 11:31:43 $Revision: r11ros/13
        PATCH_11.00 (PHKL_24012)
    /usr/conf/lib/libvxfs_base.a(vx_inode.o):
        $Source: kern/vxfs/vx_inode.c, hpuxsysvx, vvos_rose, rose0250
        $ $Date: 01/10/05 08:20:04 $ $Revision: 1.25 PATCH_11.04
        (PHKL_25054) $
    /usr/conf/lib/libvxfs_base.a(vx_rdwri.o):
        $Source: kern/vxfs/vx_rdwri.c, hpuxsysvx, vvos_rose, rose0250
        $ $Date: 01/10/05 08:21:06 $ $Revision: 1.19 PATCH_11.04
        (PHKL_25054) $
    /usr/conf/lib/libvxfs_base.a(vx_vnops.o):
        $Source: kern/vxfs/vx_vnops.c, hpuxsysvx, vvos_rose, rose0250
        $ $Date: 01/10/05 08:21:49 $ $Revision: 1.31 PATCH_11.04
        (PHKL_25054) $

    JournalFS.VXFS-BASE-KRN,fr=B.11.04,fa=HP-UX_B.11.04_64,v=HP:
    /usr/conf/lib/libvxfs_base.a(vx_bsdquota.o):
        vx_bsdquota.c $Date: 2001/05/07 13:50:24 $Revision: r11ros/3
        PATCH_11.00 (PHKL_24027)
    /usr/conf/lib/libvxfs_base.a(vx_iflush.o):
        vx_iflush.c $Date: 2001/04/25 11:31:43 $Revision: r11ros/13
        PATCH_11.00 (PHKL_24012)
    /usr/conf/lib/libvxfs_base.a(vx_inode.o):
        $Source: kern/vxfs/vx_inode.c, hpuxsysvx, vvos_rose, rose0250
        $ $Date: 01/10/05 08:20:04 $ $Revision: 1.25 PATCH_11.04
        (PHKL_25054) $
    /usr/conf/lib/libvxfs_base.a(vx_rdwri.o):
        $Source: kern/vxfs/vx_rdwri.c, hpuxsysvx, vvos_rose, rose0250
        $ $Date: 01/10/05 08:21:06 $ $Revision: 1.19 PATCH_11.04
        (PHKL_25054) $
    /usr/conf/lib/libvxfs_base.a(vx_vnops.o):
        $Source: kern/vxfs/vx_vnops.c, hpuxsysvx, vvos_rose, rose0250
        $ $Date: 01/10/05 08:21:49 $ $Revision: 1.31 PATCH_11.04
        (PHKL_25054) $

cksum(1) Output:
    JournalFS.VXFS-PRG,fr=B.11.04,fa=HP-UX_B.11.04_32/64,v=HP:
    1597616715 9802 /usr/include/sys/fs/vx_bsdquota.h
    678860475 16256 /usr/include/sys/fs/vx_port.h

    JournalFS.VXFS-BASE-KRN,fr=B.11.04,fa=HP-UX_B.11.04_32,v=HP:
    1583160915 31192 /usr/conf/lib/libvxfs_base.a(vx_bsdquota.o)
    2791890831 32908 /usr/conf/lib/libvxfs_base.a(vx_iflush.o)
    695774865 50044 /usr/conf/lib/libvxfs_base.a(vx_inode.o)
    89794469 36792 /usr/conf/lib/libvxfs_base.a(vx_rdwri.o)
    3182930928 31300 /usr/conf/lib/libvxfs_base.a(vx_vnops.o)

    JournalFS.VXFS-BASE-KRN,fr=B.11.04,fa=HP-UX_B.11.04_64,v=HP:
    870772247 66824 /usr/conf/lib/libvxfs_base.a(vx_bsdquota.o)
    4147897057 77552 /usr/conf/lib/libvxfs_base.a(vx_iflush.o)
    2382098994 118832 /usr/conf/lib/libvxfs_base.a(vx_inode.o)
    361428793 58496 /usr/conf/lib/libvxfs_base.a(vx_rdwri.o)
    2227160423 62872 /usr/conf/lib/libvxfs_base.a(vx_vnops.o)

Patch Conflicts: None

Patch Dependencies:
    s700: 11.04: PHKL_19142
    s800: 11.04: PHKL_19142

Hardware Dependencies: None

Other Dependencies: None

Supersedes: PHKL_19989 PHKL_22561 PHKL_20911 PHKL_21225 PHKL_19441

Equivalent Patches:
    PHKL_24027:
        s700: 11.00
        s800: 11.00

Patch Package Size: 640 KBytes

Installation Instructions:
    Please review all instructions and the Hewlett-Packard SupportLine
    User Guide or your Hewlett-Packard support terms and conditions
    for precautions, scope of license, restrictions, and limitation of
    liability and warranties, before installing this patch.

    ------------------------------------------------------------

    1. Back up your system before installing a patch.

    2. Login as root.

    3. Copy the patch to the /tmp directory.

    4. Move to the /tmp directory and unshar the patch:

        cd /tmp
        sh PHKL_25054

    5. Run swinstall to install the patch:

        swinstall -x autoreboot=true -x patch_match_target=true \
            -s /tmp/PHKL_25054.depot

    By default swinstall will archive the original software in
    /var/adm/sw/save/PHKL_25054. If you do not wish to retain a copy
    of the original software, use the patch_save_files option:

        swinstall -x autoreboot=true -x patch_match_target=true \
            -x patch_save_files=false -s /tmp/PHKL_25054.depot

    WARNING: If patch_save_files is false when a patch is installed,
    the patch cannot be deinstalled. Please be careful when using this
    feature.
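    If the original files were saved (the default), the patch can
    later be deinstalled with swremove(1M); a typical invocation,
    shown here only as an example, is:

        swremove -x autoreboot=true PHKL_25054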
    For future reference, the contents of the PHKL_25054.text file are
    available in the product readme:

        swlist -l product -a readme -d @ /tmp/PHKL_25054.depot

    To put this patch on a magnetic tape and install from the tape
    drive, use the command:

        dd if=/tmp/PHKL_25054.depot of=/dev/rmt/0m bs=2k

Special Installation Instructions:
    This patch depends on base patch PHKL_19142. For successful
    installation, please ensure that PHKL_19142 is already installed,
    or that PHKL_19142 is included in the same depot with this patch
    and is selected for installation.
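    One way to confirm that the PHKL_19142 dependency is already
    present on the target system before installing (shown only as an
    example; see swlist(1M) for details) is:

        swlist -l product PHKL_19142

    If PHKL_19142 is not listed, install it first or include it in the
    same depot as this patch, as described above.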