[Dxspider-support] RAM and Disk Usage

Dirk Koopman djk at tobit.co.uk
Mon Jan 29 11:08:03 GMT 2024


A large, well-known and well-used node that runs on a 2GB RAM / 25GB disk 
Digital Ocean droplet:

top - 10:42:47 up 607 days,  3:10,  1 user,  load average: 0.03, 0.05, 0.00
Tasks:  64 total,   1 running,  63 sleeping,   0 stopped,   0 zombie
%Cpu(s):  3.7 us,  0.3 sy,  0.0 ni, 96.0 id,  0.0 wa,  0.0 hi,  0.0 si,  0.0 st
MiB Mem :   1997.9 total,    105.2 free,    890.3 used,   1002.4 buff/cache
MiB Swap:      0.0 total,      0.0 free,      0.0 used.    929.7 avail Mem

Nodes: 26/413 Users [Loc/Clr]: 715/5320 Max: 1223/8258 - Uptime: 307d 15h 23m

   PID USER      PR  NI    VIRT    RES    SHR S  %CPU  %MEM TIME+ COMMAND
28461 sysop     20   0  814376 800944   4512 S   6.0  39.1 34785:26 perl

Disk usage is a "how long is a piece of string" question: it depends 
entirely on how long you want to keep stuff.
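If you want to see where your own node's space is going, du over the data 
directories is the quickest check. A sketch only: /spider is the 
conventional install path, and the subdirectory names may differ on your 
build.

   du -sh /spider/data /spider/local_data 2>/dev/null | sort -h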

  df -h
Filesystem      Size  Used Avail Use% Mounted on
udev            991M     0  991M   0% /dev
tmpfs           200M   21M  180M  11% /run
/dev/vda1        25G   14G   11G  56% /
tmpfs           999M     0  999M   0% /dev/shm
tmpfs           5.0M     0  5.0M   0% /run/lock
tmpfs           999M     0  999M   0% /sys/fs/cgroup
tmpfs           200M     0  200M   0% /run/user/1000

On said BIG node he is using 14GB out of 25GB with fairly aggressive 
(debug) log pruning, but on GB7DJK, which has all the spots and logs from 
1997 onwards, there are 21GB of data, including 11 days of (full) debug 
files using 4.8GB.

GB7DJK also had 26 node connections, hence the rather large debug files, 
especially as I have more debugging options open than "normal" people do.
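If you want to prune debug files on a schedule, something along these 
lines in cron will do it. A sketch only: the path is an assumption based 
on a conventional /spider install, the 11-day retention is just an 
example, and you should run it with -print instead of -delete first to 
see what would go.

   # delete debug files older than 11 days (path and retention are examples)
   find /spider/local_data/debug -type f -mtime +11 -delete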

What I would say is that virtual hosts that use virtual storage (as 
opposed to a slice of a physical disk, as in the case of e.g. DO and 
Hetzner) work poorly when there are many disk accesses per second. Yes, 
Google, Amazon and M$, I am looking at you. It's OK on a small leaf node, 
but anything large will crawl like a snail on Mogadon.
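If you want to find out whether a given host's storage is up to the job 
before committing to it, a quick random-read test with fio puts a number 
on it. A sketch only: fio has to be installed, and the test file size and 
run time here are arbitrary.

   # measure 4k random-read IOPS on the current filesystem for 30 seconds
   fio --name=iops-test --rw=randread --bs=4k --size=256M \
       --ioengine=libaio --direct=1 --runtime=30 --time_based

Roughly speaking, the lower that number comes out, the more the sort of 
logging described above will hurt.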

A "normal" node with up to 10 node connections and up to 400 odd 
concurrently connected users will work fine in a 1GB/20GB droplet. You 
need some headroom for spawning commands (eg sh/dx sh/log etc) and it is 
the need for this that would mean going up to 2GB RAM once one gets more 
than about 600 connected users.
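To see how much headroom you actually have, watching the resident size of 
the node's perl process (and anything it spawns) is enough. A sketch; the 
process selection may need adjusting if other perl jobs run on the box.

   # biggest perl processes by resident memory, refreshed every 5 seconds
   watch -n 5 'ps -C perl -o pid,rss,vsz,etime,cmd --sort=-rss | head'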

Hope this is helpful

73 Dirk G1TLH


On 29/01/2024 03:11, Christopher Schlegel via Dxspider-support wrote:
> Hi all,
>
> I currently have my, rather new, node hosted as a virtual machine on 
> one of my servers. I'm curious as to what kind of RAM usage and disk 
> usage other sysops are seeing on their established nodes.
>
> Any help would be appreciated so I can get the necessary resources 
> squared away before I announce to my club members.
>
> 73,
>
> Chris, WI3W