DBA: Shut down the database.
Unix: Configure support for large pages by setting the following kernel tunable parameters (see the sizing sketch after this checklist):
vmo -p -o lgpg_size=16777216   (16 MB page size)
vmo -p -o lgpg_regions=385     (number of 16 MB regions; DBA to provide this value, 385 here)
vmo -p -o v_pinshm=1           (allow shared memory to be pinned)
Unix: chuser capabilities=CAP_BYPASS_RAC_VMM,CAP_PROPAGATE <user_id>   (grant the oracle user the capability to use pinned/large-page memory)
Unix: Run bosboot -a -d /dev/hdiskxx for each hdisk that has hd5 on it; otherwise the next reboot will take longer.
Unix: Reboot the server.
DBA: Set the environment variable ORACLE_SGA_PGSZ to 16m in the profile file of the oracle user.
DBA: Set lock_sga=true in init.ora.
DBA: Set sga_target=6G in init.ora for <user_id>.
DBA: Start up the database.
Unix: Verify that large pages are being used by Oracle:
ps -ef | grep _pmon        (find the PID of the PMON background process)
svmon -P <PID> | head      (the report should show 16 MB pages in use for the SGA)
DBA: Verify that lock_sga is true.
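As a rough sketch of the sizing and verification above (the 6 GB SGA, the single spare region, and the hdiskxx placeholder are assumptions to adjust for your environment; run the vmo/chuser/bosboot commands as root):

# Size the large page pool from the SGA target: 16 MB pages, plus one spare region.
SGA_MB=6144                          # sga_target of 6 GB
REGIONS=$(( SGA_MB / 16 + 1 ))       # 6144/16 + 1 = 385
echo "lgpg_regions needed: $REGIONS"

# Apply the tunables persistently and grant the oracle user the needed capabilities.
vmo -p -o lgpg_size=16777216
vmo -p -o lgpg_regions=$REGIONS
vmo -p -o v_pinshm=1
chuser capabilities=CAP_BYPASS_RAC_VMM,CAP_PROPAGATE oracle

# Rebuild the boot image on every disk that holds hd5 (lslv -m hd5 lists them), then reboot:
# bosboot -a -d /dev/hdiskxx

# In the oracle user's profile, before restarting the database:
# export ORACLE_SGA_PGSZ=16m

# After the database is back up, confirm the SGA is backed by 16 MB pages:
PMON_PID=$(ps -ef | grep "[_]pmon" | awk '{print $2}' | head -1)
svmon -P "$PMON_PID" | head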
Please keep in mind that once large pages are set up, if they are not used by Oracle they cannot be used for anything else, which results in excessive paging. We need either to pursue enabling large pages aggressively until they are successfully enabled, or to revert to non-large-page use until resources can be brought to bear to pursue this option successfully. Otherwise the system is essentially useless.
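One quick way to see whether the reserved pool is actually being consumed (a sketch; the vmstat -l large-page columns and the 5-second/3-sample interval are assumptions for this AIX level):

vmstat -l 5 3      # "alp" = active large pages in use, "flp" = free large pages still reserved

# If alp stays at 0 while flp equals lgpg_regions, Oracle is not using the pool;
# either fix the Oracle side (lock_sga, ORACLE_SGA_PGSZ, chuser capabilities)
# or give the memory back by setting the pool to zero:
# vmo -p -o lgpg_regions=0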
Let me know if you need some help.
Reference: http://publib.boulder.ibm.com/infocenter/pseries/v5r3/index.jsp?topic=/com.ibm.aix.prftungd/doc/prftungd/large_page_ovw.htm

Performance issues with Oracle/GPFS:
Performance degradation is expected when converting from JFS2 (a buffered filesystem) to GPFS (a non-buffered filesystem) for Oracle.
Before the upgrade, all memory not used by the kernel or applications was utilized for filesystem cache (see the MEMNEW tab in the nmon reports). This cache effectively prevented a large percentage of the physical read I/O requests initiated by Oracle from being serviced by the I/O subsystem, because the data was already in the filesystem buffer cache. When the change was made to GPFS, 64 GB+ of buffer cache was removed, with the result that all read I/O requests issued by Oracle are now converted into physical I/O requests that must be serviced by the SAN. This is also confirmed by the marked increase in physical I/O being reported (see I/O per second in the DISKSUMM tab of the nmon reports).
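One way to confirm this from the database side is to sample the physical-read counters before and after a period of normal load (a sketch; run as SYSDBA, and the statistic names shown are standard 10.2 v$sysstat entries):

sqlplus -s "/ as sysdba" <<'EOF'
column name format a40
select name, value
from   v$sysstat
where  name in ('physical reads',
                'physical read total IO requests',
                'session logical reads');
EOF
# Run the same query again after a representative interval and compare; the jump
# in 'physical reads' mirrors the increase seen in the nmon DISKSUMM tab.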
Solution:
1) Revert back to JFS2 from GPFS until required.
2) Drastically increase the size of the database buffer cache in the SGA (80 GB, i.e. 50% of total memory, could easily be added to the SGA based on the nmon report). This is highly recommended.
If option 2 is selected, performance will still not return to what it was before the conversion, because of how blocks read by a full table scan (FTS) are read and buffered in the database buffer cache. That behavior is highly version specific, so although the impact will be minimal in the current database version (10.2.0.4), it will likely degrade again when upgrading to 11g due to a change in how multi-block reads are cached.
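If option 2 is chosen, the resize itself is straightforward (a sketch, assuming an spfile is in use, an 8 KB block size for the advisory query, and the 80 GB target taken from the nmon analysis; sga_max_size only takes effect after an instance restart):

sqlplus -s "/ as sysdba" <<'EOF'
-- See what the buffer cache advisory predicts for larger cache sizes.
select size_for_estimate, estd_physical_read_factor
from   v$db_cache_advice
where  name = 'DEFAULT' and block_size = 8192;

-- Raise the SGA ceiling and target in the spfile, then restart the instance.
alter system set sga_max_size = 80G scope=spfile;
alter system set sga_target   = 80G scope=spfile;
EOF
# If large pages stay enabled, resize the pool as well:
# an 80 GB SGA needs about 80*1024/16 + 1 = 5121 lgpg_regions.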