<!DOCTYPE html>
<html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=UTF-8">
</head>
<body>
<div class="moz-cite-prefix">A large, well-known and well-used node that
runs on a 2GB/25GB Digital Ocean droplet:<br>
<br>
<pre>top - 10:42:47 up 607 days,  3:10,  1 user,  load average: 0.03, 0.05, 0.00
Tasks:  64 total,   1 running,  63 sleeping,   0 stopped,   0 zombie
%Cpu(s):  3.7 us,  0.3 sy,  0.0 ni, 96.0 id,  0.0 wa,  0.0 hi,  0.0 si,  0.0 st
MiB Mem :  <b><font color="#ff0000">1997.9</font></b> total,    105.2 free,    890.3 used,   1002.4 buff/cache
MiB Swap:      0.0 total,      0.0 free,      0.0 used.    929.7 avail Mem

Nodes: 26/413 Users [Loc/Clr]: 715/5320 Max: 1223/8258 - Uptime: 307d 15h 23m

  PID USER      PR  NI    VIRT    RES    SHR S  %CPU  %MEM     TIME+ COMMAND
28461 sysop     20   0  814376 800944   4512 S   6.0  39.1  34785:26 perl</pre><br>
<br>
Disk usage is a "how long is a piece of string" question: it depends
on how long you want to keep stuff.<br>
<br>
<pre> df -h
Filesystem      Size  Used Avail Use% Mounted on
udev            991M     0  991M   0% /dev
tmpfs           200M   21M  180M  11% /run
<font color="#ff0000"><b>/dev/vda1        25G   14G   11G  56% /</b></font>
tmpfs           999M     0  999M   0% /dev/shm
tmpfs           5.0M     0  5.0M   0% /run/lock
tmpfs           999M     0  999M   0% /sys/fs/cgroup
tmpfs           200M     0  200M   0% /run/user/1000
</pre><br>
On said BIG node he is using 14GB out of 25GB, with fairly
aggressive (debug) log pruning. On GB7DJK, which has all the
spots and logs from 1997 onwards, there is 21GB of data, including
11 days of (full) debug files using 4.8GB.<br>
<br>
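(Not part of Dirk's message, but for anyone wondering what "aggressive
debug log pruning" could look like in practice: a minimal sketch using
find(1), demonstrated in a throwaway temp directory so it is safe to run
anywhere. On a real node you would point it at wherever your node writes
its debug files; the directory and file names here are assumptions.)<br>

```shell
# Sketch only: delete debug files older than 11 days with find(1).
# DEBUG_DIR is a stand-in; substitute your node's actual debug directory.
DEBUG_DIR=$(mktemp -d)
touch -d '20 days ago' "$DEBUG_DIR/old.dat"   # simulate a stale debug file
touch "$DEBUG_DIR/new.dat"                    # simulate a fresh one
find "$DEBUG_DIR" -type f -mtime +11 -print -delete
ls "$DEBUG_DIR"
```

Running this daily from cron keeps the debug footprint bounded.<br>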
GB7DJK also has 26 node connections, hence the rather large debug
files, especially as I have more debugging options enabled than
"normal" people.<br>
<br>
What I would say is that virtual hosts that use virtual storage
(as opposed to a slice of a physical disk, as in the case of e.g.
DO and Hetzner) work poorly when there are many disk accesses per
second. Yes, Google, Amazon and M$, I am looking at you. It's OK on
a small leaf node, but anything large will <strike>run</strike>
crawl like a snail on Mogadon. <br>
<br>
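(Again not from the original message, but a crude way to see whether a
VPS's storage chokes on frequent disk accesses is a synchronous-write
probe with dd. This is only a rough sanity check, not a benchmark; fio
is the proper tool.)<br>

```shell
# Crude sync-write latency probe (a sketch, not a benchmark).
# oflag=dsync forces every 4 KiB write to be flushed to the device,
# so slow virtual storage shows up directly in the time dd reports.
PROBE=$(mktemp)
dd if=/dev/zero of="$PROBE" bs=4k count=100 oflag=dsync 2>&1 | tail -n 1
```

On a local SSD slice this finishes in well under a second; on congested
network-backed virtual storage it can take many times longer.<br>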
A "normal" node with up to 10 node connections and up to 400-odd
concurrently connected users will work fine in a 1GB/20GB droplet.
You need some headroom for spawned commands (e.g. sh/dx, sh/log, etc),
and it is this headroom that would mean going up to 2GB of RAM
once one gets past about 600 connected users. <br>
<br>
Hope this is helpful<br>
<br>
73 Dirk G1TLH<br>
<br>
<br>
On 29/01/2024 03:11, Christopher Schlegel via Dxspider-support
wrote:<br>
</div>
<blockquote type="cite"
cite="mid:CAAh2dv2kZTw2nVoau8JvBKj9+BeGiPw-JNFcnaiUXnmH9cZXXQ@mail.gmail.com">
<meta http-equiv="content-type" content="text/html; charset=UTF-8">
<div dir="auto">Hi all,
<div dir="auto"><br>
</div>
<div dir="auto">I currently have my, rather new, node hosted as
a virtual machine on one of my servers. I'm curious as to what
kind of RAM usage and disk usage other sysops are seeing on
their established nodes. </div>
<div dir="auto"><br>
</div>
<div dir="auto">Any help would be appreciated so I can get the
necessary resources squared away before I announce to my club
members.</div>
<div dir="auto"><br>
</div>
<div dir="auto">73,</div>
<div dir="auto"><br>
</div>
<div dir="auto">Chris, WI3W</div>
<div dir="auto"><br>
</div>
</div>
<br>
<fieldset class="moz-mime-attachment-header"></fieldset>
<pre class="moz-quote-pre" wrap="">_______________________________________________
Dxspider-support mailing list
<a class="moz-txt-link-abbreviated" href="mailto:Dxspider-support@tobit.co.uk">Dxspider-support@tobit.co.uk</a>
<a class="moz-txt-link-freetext" href="https://mailman.tobit.co.uk/mailman/listinfo/dxspider-support">https://mailman.tobit.co.uk/mailman/listinfo/dxspider-support</a>
</pre>
</blockquote>
<br>
</body>
</html>