On the Exadata X2, the following are the four valid memory configurations. Please note that each database compute node motherboard has 18 memory sockets (slots) for memory DIMMs.
1. (12 x 8GB DIMMs, as shipped from Oracle) - 96GB
2. (12 x 16GB DIMMs, simply replacing the original 12 x 8GB DIMMs with 16GB DIMMs and leaving the 6 plastic inserts in place) - 192GB
3. (12 x 8GB DIMMs and 6 x 16GB DIMMs) - 192GB
4. (18 x 16GB DIMMs) - 288GB, the maximum for an Exadata X2 compute node.
I came across a client whose Exadata X2-2 did not have a supported memory configuration. They had 6 x 8GB DIMMs and 12 x 16GB DIMMs (240GB), which Oracle does not support as a valid memory combination on Exadata X2-2 hardware.
That configuration may degrade system performance, may prevent the system from achieving its full Exadata operational capability, and can potentially trigger a kernel bug.
The steps provided here were used to put a correct, supported memory configuration of 192GB in place. The parameter changes can be staged just before the memory is added to each of the database compute nodes. Once the parameters are staged, the memory can be added by Oracle field support and the nodes rebooted for the changes to take effect.
As per Oracle Support note: How to Increase Database Allocated Memory on Exadata Database Server (Doc ID 1346747.1)
The best practice for Exadata is to use hugepages. The hugepages allocation is usually set to 50-75% of the total physical memory on the server, which lets you allocate additional memory to the SGA buffer cache. Do not exceed 50% of RAM if you expect a large connection load or large PGA usage; up to 75% of RAM is possible if you monitor PGA usage and connection load very closely. Memory allocated to hugepages is always pinned and is never paged or swapped out.
In our setup we will set the hugepages to be 60% of the total physical memory. Please note that the default hugepage size on Linux is 2MB (2048KB), not to be confused with the 4KB base OS page size.
· 60% of 192GB is 115.2GB
· Convert 115.2GB to KB: 115.2 * 1024 = 117964.8 MB, then multiply by 1024 again to get the value in KB, 120795955.2
· Divide 120795955.2 KB by the 2048KB hugepage size, which gives 58982.4 pages; round down to the nearest whole number, 58982, to set the number of hugepages.
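The arithmetic above can be sketched in shell using integer math, which rounds down for us (the 192GB total and 60% target are the values from this example):

```shell
# Derive vm.nr_hugepages: 60% of 192GB of RAM, in 2048KB hugepages.
TOTAL_KB=$((192 * 1024 * 1024))      # 192GB expressed in KB
HUGEPAGE_KB=2048                     # default hugepage size on Linux x86_64
NR_HUGEPAGES=$((TOTAL_KB * 60 / 100 / HUGEPAGE_KB))  # integer division rounds down
echo "vm.nr_hugepages = $NR_HUGEPAGES"               # prints 58982
```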
1. Save a backup copy of the /etc/sysctl.conf file, then update /etc/sysctl.conf and set vm.nr_hugepages = 58982
2. Set the memlock kernel parameter in KB. This value should be less than the total physical memory on the server; it represents the maximum amount of memory a process can lock on the OS. In our example we set memlock to about 191GB. Save a backup copy of /etc/security/limits.conf, then update the file:
/etc/security/limits.conf
oracle soft memlock 200278016
oracle hard memlock 200278016
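For reference, the 200278016 figure is simply 191GB expressed in KB, and once limits.conf is updated it can be confirmed from a fresh oracle login (a sketch using this example's values):

```shell
# 191GB in KB -- the memlock value used in limits.conf above.
MEMLOCK_KB=$((191 * 1024 * 1024))
echo "memlock (KB) = $MEMLOCK_KB"   # prints 200278016
# Verify as the oracle user after logging in again:
#   $ ulimit -l
#   200278016
```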
3. Set shmall so that the total shared memory pages are less than the total physical memory and larger than the sum of all SGAs. On the server in our setup the sum of all SGAs will be 115.2GB or less, so we will use 115.2GB as our number to calculate shmall.
Convert 115.2GB to bytes (123695058124.8).
Divide by the 4KB (4096-byte) base OS page size, not to be confused with the hugepage size:
123695058124.8 / 4096 => 30198988.8, rounded down to 30198988
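The shmall arithmetic can be sketched the same way (my own restatement of the example's numbers, using integer math so the result is already rounded down):

```shell
# Derive kernel.shmall: the 115.2GB SGA budget (60% of 192GB) in 4KB pages.
TOTAL_BYTES=$((192 * 1024 * 1024 * 1024))   # 192GB in bytes
SGA_BUDGET=$((TOTAL_BYTES * 60 / 100))      # 60% of RAM, ~115.2GB
SHMALL=$((SGA_BUDGET / 4096))               # 4KB base pages; rounds down
echo "kernel.shmall = $SHMALL"              # prints 30198988
```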
4. Save a backup copy of /etc/sysctl.conf, then update the file and set kernel.shmall = 30198988
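Although the procedure here reboots the nodes anyway, both sysctl entries can also be loaded and checked immediately (a sketch; these commands must be run as root on each compute node):

```shell
# Reload /etc/sysctl.conf and confirm both kernel settings took effect.
sysctl -p
sysctl vm.nr_hugepages kernel.shmall
```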
Increase the SGA and PGA parameters across the database(s) if needed for performance. Please note that the total sum of the sga_target initialization parameter values across all instances on the node should be less than or equal to the hugepages allocation of 115.2GB that we are using in this example. The PGA is not part of the hugepages and can be sized accordingly; however, I do recommend leaving at least 5GB free for the OS.
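Putting those constraints together, a rough sanity check of the remaining PGA headroom looks like this (my own sketch, rounding the 115.2GB hugepages budget down to whole GB):

```shell
# Physical RAM minus the hugepages budget and the OS reserve leaves
# the rough ceiling for combined PGA usage across all instances.
TOTAL_GB=192
HUGEPAGES_GB=115    # ~60% of 192GB, rounded down to whole GB
OS_RESERVE_GB=5
PGA_BUDGET_GB=$((TOTAL_GB - HUGEPAGES_GB - OS_RESERVE_GB))
echo "PGA budget <= ${PGA_BUDGET_GB}GB"   # prints 72
```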
Please note that memory_target, which is part of AMM (Automatic Memory Management), should not be set, as AMM is not compatible with hugepages. ASMM (Automatic Shared Memory Management), which uses the sga_target parameter, should be used instead.
The Oracle field engineer will make the changes to the database compute nodes, add or adjust the memory configuration, and then start up the compute nodes once complete. Run the free -g command to confirm the memory is 192GB; it may report 188GB due to motherboard settings.
To verify hugepages have been set correctly on the OS, please run the following command on each node. The values below are just an example and do not reflect the settings we have.
$ cat /proc/meminfo | grep -i HugePages_Total
HugePages_Total: 58982
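Once the instances are up, it can also be worth checking the Free and Rsvd counters alongside the total (a sketch; if the SGAs landed in hugepages, HugePages_Free drops well below the total and HugePages_Rsvd stays low):

```shell
# Show the hugepage usage counters; field names are from /proc/meminfo.
grep -i -E 'HugePages_(Total|Free|Rsvd)' /proc/meminfo
```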
For each database, please verify in the database alert log that hugepages are being used. The values below are an example from the alert log file for an instance.
Starting ORACLE instance (normal)
************************ Large Pages Information *******************
Per process system memlock (soft) limit = UNLIMITED
Total Shared Global Region in Large Pages = 40 GB (100%)
...