

In this blog post, we will discuss best practices for the MongoDB ecosystem, applied at both the Operating System (OS) and MongoDB levels.

The main objective of this post is to share my experience from the past years tuning MongoDB, and to centralize in a single place the diverse sources I came across on that journey. Spoiler alert: this post focuses on the MongoDB 3.6.x series and higher, since earlier versions have reached their End-of-Life (EOL).

Note that the intent of tuning these settings is not exclusively about improving performance, but also about enhancing the high availability and resilience of the MongoDB database. Without further ado, let's start with the OS settings.

Swappiness is a Linux kernel setting that influences the behavior of the Virtual Memory manager when it needs to allocate swap, ranging from 0 to 100. A setting of "0" tells the kernel to swap only to avoid out-of-memory problems, while a setting of 100 tells it to swap aggressively to disk. The Linux default is usually 60, which is not ideal for database usage. It is common to see a value of "0" (or sometimes "10") on database servers, telling the kernel to prefer keeping pages in memory for better response times. However, Ovais Tariq details a known bug (or feature) when using a setting of "0".

The dirty_ratio is the percentage of total system memory that can hold dirty pages. The default on most Linux hosts is between 20-30%. When you exceed the limit, the dirty pages are committed to disk, creating a small pause. To avoid that hard pause, there is a second ratio, dirty_background_ratio (default 10-15%), which tells the kernel to start flushing dirty pages to disk in the background without any pause. 20-30% is a good general default for dirty_ratio, but on large-memory database servers, this can be a lot of memory. For example, on a 128GB-memory host, this can allow up to 38.4GB of dirty pages, and the background ratio won't kick in until 12.8GB. It is recommended to lower this setting and monitor the impact on query performance and disk IO. A recommended setting for dirty ratios on large-memory (64GB+) database servers is vm.dirty_ratio = 15 and vm.dirty_background_ratio = 5, or possibly less. (Red Hat recommends lower ratios of 10 and 3 for high-performance/large-memory servers.) The goal is to reduce memory usage without negatively impacting query performance.
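As an illustration, here is a minimal sketch of how these values could be set persistently in /etc/sysctl.conf; the swappiness value of 1 is an assumption on my part (a common way to sidestep the "0" issue mentioned above), and all numbers must be validated against your own workload:

    vm.swappiness = 1
    vm.dirty_ratio = 15
    vm.dirty_background_ratio = 5

Apply the file with sysctl -p and verify with, for example, sysctl vm.swappiness.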
Moving on to the MongoDB-level settings: WiredTiger uses "tickets" to cap how many read and write operations can run concurrently inside the storage engine, with a default of 128 of each. To estimate whether changing this is necessary, you have to observe the workload behavior; again, PMM is suitable for this situation. You can set the tickets by adding the following lines to /etc/mongod.conf:

    setParameter:
      wiredTigerConcurrentWriteTransactions: 256

Note that sometimes increasing the level of parallelism might lead to the opposite of the desired effect on an already loaded server. At this point, it might be necessary to reduce the number of tickets to the current number of CPUs/vCPUs available (if the server has 16 cores, set the read and write tickets to 16 each). This parameter needs to be extensively tested!
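The same parameters can also be changed at runtime, without a restart; a minimal sketch from the mongo shell, showing both the read and write variants (the value 256 is illustrative only):

    db.adminCommand({ setParameter: 1, wiredTigerConcurrentReadTransactions: 256 })
    db.adminCommand({ setParameter: 1, wiredTigerConcurrentWriteTransactions: 256 })

Runtime changes are lost on restart, so persist the final values in /etc/mongod.conf once they have proven themselves.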

Pitfalls for mongos in containers

Secondly, grouping containers in Kubernetes Pods ends up creating tons of mongos processes, resulting in additional overhead. Also, the mongos process is not cgroups-aware, which means it can blow up the CPU usage by creating tons of TaskExecutor threads: it sizes its thread pools from the total CPU count of the host rather than from the limits granted to the container.
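A sketch of one possible mitigation, assuming you want to stop mongos from deriving its pool count from the host CPUs: pin the number of TaskExecutor pools explicitly in the mongos configuration file. The value 4 below is hypothetical and, like the tickets above, needs to be extensively tested:

    setParameter:
      taskExecutorPoolSize: 4

Sizing this to the CPU limit of the container, rather than of the host, keeps the thread count proportional to the resources the Pod can actually use.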
