How NUMA Allocates Memory

According to MOS, in a NUMA system processors, memory, and I/O are grouped together into nodes, so that each processor is bound to a specific range of local memory.  By default a NUMA system chooses the local node when allocating memory and will exhaust all of the memory on that node before it starts allocating from remote NUMA nodes.  While this keeps an object that fits in a single NUMA node on that node and avoids fragmenting it across nodes, it can also result in aggressive swapping on one node while there is plenty of free memory on other nodes.
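As a quick way to see this local-first behavior on a live system, the allocation policy can be inspected, and an interleaved policy tried out, with the numactl utility (a minimal sketch, assuming the numactl package is installed; my_app below is a placeholder for whatever program you want to test):

 # show the memory allocation policy in effect for the current shell
 numactl --show

 # run a process with its memory interleaved across all nodes instead of
 # the default local-node-first policy (my_app is a hypothetical program)
 numactl --interleave=all my_app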

It's strongly recommended to evaluate the performance impact and test the NUMA settings thoroughly before changing them.  In our environment, we disable it. If NUMA is disabled, the kernel notes it at boot:

 - root: dmesg | grep -i numa
 Command line: ro root=/dev/sysVG/rootLV numa=off crashkernel=128M@16M
 NUMA turned off
 Kernel command line: ro root=/dev/sysVG/rootLV numa=off crashkernel=128M@16M


Alternatively, verify the setting from the kernel command line, the boot loader configuration, and the NUMA hardware layout (numa=off is appended to the kernel line in /etc/grub.conf and takes effect after a reboot):

 - root: cat /proc/cmdline
ro root=/dev/sysVG/rootLV numa=off crashkernel=128M@16M

 - root: cat /etc/grub.conf
 kernel /vmlinuz-2.6.18-238.9.1.el5 ro root=/dev/sysVG/rootLV numa=off crashkernel=128M@16M


  - root: numactl --hardware
 available: 1 nodes (0)
 node 0 size: 145416 MB
 node 0 free: 46749 MB
 node distances:
 node   0
  0:  10
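If NUMA is left enabled, per-node allocation behavior can be monitored to catch the kind of single-node memory pressure described above (a sketch; numastat ships with the numactl package and its counters are cumulative since boot):

 # per-node counters such as numa_hit, numa_miss and other_node
 numastat

 # per-node memory breakdown exposed through sysfs
 cat /sys/devices/system/node/node0/meminfo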

Starting with 11g Release 2, the _enable_NUMA_support parameter defaults to FALSE.

select a.KSPPINM "Parameter", b.KSPPSTVL "Session Value"
     , c.KSPPSTVL "Instance Value"
  from x$ksppi a, x$ksppcv b, x$ksppsv c
 where a.INDX = b.INDX
   and a.INDX = c.INDX
   and a.KSPPINM = '_enable_NUMA_support';

_enable_NUMA_support
FALSE
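
If, after testing, NUMA support were to be re-enabled at the database level, the hidden parameter could be set in the spfile and the instance bounced (a sketch only, assuming the instance uses an spfile; underscore parameters should normally be changed only with guidance from Oracle Support):

 sqlplus / as sysdba <<EOF
 ALTER SYSTEM SET "_enable_NUMA_support" = TRUE SCOPE=SPFILE;
 SHUTDOWN IMMEDIATE
 STARTUP
 EOF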

