Important Update: This article was originally posted back in 2014. However, as I covered in a later 2017 blog post, Does your Linux server need a RAM upgrade? Let’s check with free, top, vmstat and sar, there has since been a Linux kernel change that addresses this.
(That change should inspire New Relic and others to follow suit in how they report memory use; New Relic has since changed.) The Linux kernel now reports memory marked as available: “An estimate of how much memory is available for starting new applications, without swapping. Unlike the data provided by the Cache or Free fields, this field takes into account page cache and also that not all reclaimable memory slab will be reclaimed due to items being in use (MemAvailable in /proc/meminfo, available on kernels 3.14, emulated on kernels 2.6.27+, otherwise the same as free).”
kernel.org: /proc/meminfo: Provide estimated available memory
“Many load balancing and workload placing programs check /proc/meminfo to estimate how much free memory is available. They generally do this by adding up ‘free’ and ‘cached’, which was fine ten years ago, but is pretty much guaranteed to be wrong today. It is wrong because Cached includes memory that is not freeable as page cache, for example shared memory segments, tmpfs and ramfs, and it does not include reclaimable slab memory, which can take up a large fraction of system memory on mostly idle systems with lots of files. Currently, the amount of memory that is available for a new workload, without pushing the system into swap, can be estimated from MemFree, Active(file), Inactive(file) and SReclaimable, as well as the ‘low’ watermarks from /proc/zoneinfo. It is more convenient to provide such an estimate in /proc/meminfo. If things change in the future, we only have to change it in one place…” – Source
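To make the commit's point concrete, here is a minimal sketch contrasting the old “free + cached” guess with a simplified version of the kernel's MemAvailable-style estimate. The numbers are invented sample values in kB (the real kernel computation is more involved, e.g. it caps how much page cache it counts); on a 3.14+ kernel you would simply read the MemAvailable line from /proc/meminfo instead of computing anything yourself.

```shell
# Sample /proc/meminfo-style values, in kB (illustrative only)
mem_free=28160
cached=3571348
active_file=2100000
inactive_file=1300000
sreclaimable=140000
low_watermark=65000   # sum of per-zone "low" watermarks from /proc/zoneinfo

# The old (misleading) estimate: free + cached
old_estimate=$((mem_free + cached))

# A simplified MemAvailable-style estimate:
# free + reclaimable page cache + reclaimable slab, minus the low watermarks
new_estimate=$((mem_free + active_file + inactive_file + sreclaimable - low_watermark))

echo "old estimate: ${old_estimate} kB"
echo "new estimate: ${new_estimate} kB"
```

The two can differ substantially, because Cached counts pages (tmpfs, shared memory) that cannot actually be reclaimed, while the new estimate counts reclaimable slab memory that Cached misses.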
Original article: Does the New Relic screenshot below look familiar? Let’s say you have a web server with 2GB of RAM, or maybe 256GB. Your web apps are running slow, so you check New Relic and/or other server monitoring and APM tools, but unfortunately no red flags are showing. The swap usage seen above might worry you a bit, but you say… “Hey, there’s a lot of free space left, right!?” Technically, yes; but as it pertains to Linux web server performance, no, not at all. Let’s discuss.
Linux Desktop Memory Usage vs Linux Web Server Memory Usage
On your Linux-based home computer or laptop, you’ll be running Ubuntu, Linux Mint, Debian, or perhaps Fedora, which is my favorite desktop distro. My laptop’s uptime currently shows 5 days, 11 hours and 2 minutes. I use the standby feature a lot and have seen uptimes of 30+ days. I mention this because the average user probably doesn’t keep their home system running for that long. After all, they are not web servers! 🙂 But if you do keep them running, even better. Why does this matter here?
Let’s say you have 8GB of installed RAM on your laptop, and earlier today you used GIMP (image editor), Chrome and LibreOffice. Chances are, unless you later do heavy computing with other applications or restart your system, Linux will keep many of the necessary files and paths buffered and cached in RAM (memory). This is very useful because if for some reason you decide to edit that photo again, browse the web or open a new file in LibreOffice, all of these applications will open and work much faster the second time around. This is because their files were cached (temporarily saved) in memory. If you then stop using GIMP or LibreOffice for a while, the Linux kernel will gradually replace those cached files with data from your newer applications. This is perfectly fine, because you don’t need to keep old and unused files in memory from hours, days, or even weeks ago, especially when you don’t have spare system memory.
However, Linux servers are different, very different. The same files are requested over and over again at varying rates throughout the day; often several times per minute, or several times per second on busy servers. So how often do you want to evict those files and paths from the cache? With Linux web servers, we want to keep cached (and buffered) data for as long as possible, so that the kernel serves those files from cached memory instead of from disk (very slow!), rather than being forced by cache pressure to evict files to make room for new ones. Looking at the memory usage graph above again, most of that “unused” white space is actually cache and buffers.
Cached memory is well defined by Jonathan Diprizio of techthrob.com:
“Cached [memory] is the amount of RAM used to hold copies of actual files. When files are constantly being read or written, keeping them in memory reduces the need to hit the physical hard disk; as a result, using memory for cache can significantly improve performance.”
With this in mind, we should properly size memory for web servers; otherwise, the kernel will increasingly have to push cached data back to disk. Disk storage, even an SSD (Solid-State Drive), is noticeably slower than RAM!
The ‘free’ command will never let you down!
From the Linux command line, running the free command (or free -m, or free -g) will often reveal that you are “using” more memory than you think! See this example from the Red Hat documentation below:
$ free
             total       used       free     shared    buffers     cached
Mem:       4040360    4012200      28160          0     176628    3571348
-/+ buffers/cache:     264224    3776136
Swap:      4200956      12184    4188772
Note that only 28160KB is “free”. However, on the next line, see how much memory is used by buffers and caches! Linux always tries to use memory first to speed up disk operations, using available memory for buffers (file system metadata) and cache (pages holding the actual contents of files or block devices). This helps the system run faster, because disk information is already in memory, which saves I/O operations. If more space is needed, Linux frees up the buffers and cache to yield memory for applications. If there is not enough “free” memory, data will be swapped out to disk. It is wise to monitor this, keeping swapping and cache contention within an acceptable range that does not affect performance. – Source: Red Hat
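The “-/+ buffers/cache” line in that free output is just arithmetic on the Mem line, and you can verify it yourself. Here is a small sketch using the kB values from the Red Hat example above:

```shell
# Values from the Mem: line of the free output above, in kB
total=4040360
used=4012200
free_mem=28160
buffers=176628
cached=3571348

# Memory truly consumed by applications: used minus buffers and cache
real_used=$((used - buffers - cached))

# Memory actually available to applications: free plus buffers and cache
real_free=$((free_mem + buffers + cached))

echo "-/+ buffers/cache: ${real_used} ${real_free}"
```

The result matches free's own “-/+ buffers/cache” line (264224 used, 3776136 free), which is why that line, not the raw “free” column, tells you how much memory applications can really get.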
Take a look at the screen capture below. This time, remember that the white/unshaded area (under “Physical Memory”) is largely occupied by the caches and buffers your web server relies on to maintain the speedy performance you want. Note the effect of swapping in this case: increased disk I/O latency resulting in blocked CPU time, or I/O wait. The fix for this web server at the time did not involve a memory upgrade, but rather recovering memory by reconfiguring MySQL, which had been misconfigured in the “bigger is always better” direction, and even by deleting a group of unused MySQL databases.
Now, please don’t take away from this that I’m suggesting “eliminating” swap entirely. 🙂 No, swap has its place and purpose. Instead, it is best to find the balance where swapping doesn’t get in the way of throughput/performance. There are many tools and services out there for monitoring web server memory usage. Although technically correct, the last two graphs can be misleading to those who rely on “percentage used” in decision making. Be sure to check the server’s status whenever there is frequent swapping.
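One simple way to check for frequent swapping is to watch the si (swap-in) and so (swap-out) columns of vmstat output. The sketch below parses a captured sample rather than running vmstat live, so it is self-contained; the sample numbers are invented, and the column positions assume the standard procps vmstat layout (si and so are the 7th and 8th columns). Non-zero si/so on most lines means the server is actively swapping, not merely holding old pages in swap.

```shell
# Three invented vmstat sample lines (columns: r b swpd free buff cache si so bi bo in cs)
sample='
 0  0  12184 28160 176628 3571348    0    0     12    40  210  400
 1  0  12184 27100 176628 3570100  512  640     30   120  350  700
 2  1  12184 26000 176628 3568900  800  900     25   200  420  810
'

# Count lines showing swap-in or swap-out activity (si = $7, so = $8)
swapping_lines=$(echo "$sample" | awk 'NF { if ($7 > 0 || $8 > 0) n++ } END { print n+0 }')
echo "lines with swap-in/swap-out activity: $swapping_lines"
```

On a real server you would run something like `vmstat 5` for a while; occasional blips in si/so are harmless, but sustained non-zero values are the “frequent swapping” worth investigating.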
Swapping isn’t always a bad thing!
Opportunistic swapping actually helps performance! This is when the system has nothing better to do, so it writes cached data that hasn’t been used for a long time out to disk swap space. The nice thing about opportunistic swapping is that the server still keeps a copy of that data in physical memory and serves it from there. But if things get busy later and the server needs that memory for something else, it can drop those pages without performing extra, untimely writes to disk. It’s healthy! That said, if you have plenty of RAM, there will be less swapping. For example, the 64GB server below has run for 60+ days with about 5GB of free memory (about 8%) and no swap used:
root@server [~]# free -m
             total       used       free     shared    buffers     cached
Mem:         62589      57007       5582          0       1999      31705
-/+ buffers/cache:      23302      39287
Swap:         1999          0       1999
Here’s what 8% free memory looks like on New Relic:
…disk IO usage is beautiful.
Graphing Linux Web Server Memory Usage With Munin
Finally, let’s look at the open-source monitoring tool Munin. In the graph below, cached memory is labeled “cache” by Munin. In this instance – click the image to enlarge – the server is healthy, and swapping has no effect on performance because the cache and buffers are large enough that the kernel can swap selectively. The ratio of used memory to cached/buffered memory shows this server caching more than 3 times its “used” memory. On web servers, there is no hard and fast rule for the recommended cache size; it varies from case to case. That said, having 50% or more of your RAM used by cache is great for performance!
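You can compute this cache-to-used ratio for any server from its free output. Here is a rough sketch using the 64GB server’s free -m numbers shown earlier (that particular server is caching about 1.45x its application memory, less than the 3x in the Munin graph, but still a healthy share of RAM):

```shell
# Values in MB from the 64GB server's "free -m" output shown earlier
app_used=23302   # "-/+ buffers/cache" used: memory truly consumed by applications
buffers=1999
cached=31705

# Ratio of cache+buffers to real application memory
ratio=$(awk -v c=$((buffers + cached)) -v u="$app_used" 'BEGIN { printf "%.2f", c / u }')
echo "cache/used ratio: $ratio"
```

A ratio comfortably above 1 means the kernel is devoting more memory to caching than applications are consuming, which is exactly what you want on a web server.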
Yet another example: a 30GB web server using no swap. We could probably play with the kernel’s sysctl configuration here…
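By “play with sysctl,” I mean tunables like vm.swappiness and vm.vfs_cache_pressure. The fragment below is a hedged example, not a recommendation; the right values depend entirely on your workload, and the specific numbers here are illustrative starting points only:

```shell
# /etc/sysctl.conf fragment — example values, tune for your workload

# Lower swappiness makes the kernel less eager to swap application
# memory out in favor of growing the cache (kernel default is 60)
vm.swappiness = 10

# Lower vfs_cache_pressure makes the kernel reclaim inode/dentry
# caches less aggressively (kernel default is 100)
vm.vfs_cache_pressure = 50
```

Apply changes with `sysctl -p`, and watch swap activity and cache size afterward before deciding whether the change actually helped.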
Remember, unlike with Linux desktops, with web servers, a much higher percentage of already accessed files are requested over and over again. So while seeing cached memory as “free” on a desktop install is fine, this is not the case with a web server. Let’s keep them cached!
Another example of counting cached/buffered memory as free vs used
Last month, a server suffered a MySQL failure after running out of memory, which had exhausted all available swap. Here’s Munin’s report of memory usage (and also committed memory) for that period:
Here is New Relic’s report of memory usage for the same period. It shows the swap usage, but also what appears to be a healthy amount of white space, with only about 50% “used”:
The server’s owner was thus unaware of the extent of the problem. If Linux only used swap once it was completely out of memory, this New Relic graph might not be confusing, but as we discussed above, that is not the case. This makes it difficult for non-admins who rely on tools that consistently show 50% free. So, are you “correctly” measuring Linux web server memory usage?
1st post: 2014 / Last Updated: November 11, 2018