The CHR should mainly be running in RAM, correct? So that should eliminate Ceph as a problem? The NICs aren't overtaxed, but I could pass them through directly if that would help. I could also move traffic off the shared NIC onto dedicated ones (I'd have to do that to pass them through anyway); is that the most likely culprit? The servers see about 10% CPU and about 40% RAM utilization, Ceph doesn't seem overtaxed, and neither do the NICs: they're X520 dual 10 Gb NICs, and we're passing maybe 1.5 Gb of total traffic. Packets come in on the shared interface and leave on a dedicated one.
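Before going all the way to passthrough, it might be worth checking whether the virtio queues are actually active and where the drops are being counted. A rough diagnostic sketch on the Proxmox host (the tap device, NIC, and bridge names below are placeholders for our actual ones):

```shell
# Show how many combined queues the CHR's tap interface actually has
# (tap101i0 is a placeholder for the CHR's tap device on the host)
ethtool -l tap101i0

# Look for RX/TX drops on a physical X520 port and on the shared bridge
ip -s link show dev enp3s0f0   # placeholder name for one X520 port
ip -s link show dev vmbr0      # placeholder bridge carrying the shared NIC

# Per-queue drop/miss/error counters from the ixgbe driver
ethtool -S enp3s0f0 | grep -Ei 'drop|miss|err'
```

If the drops show up on the tap/bridge side rather than the physical port, that points at the virtualization layer rather than the NICs themselves.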
We made a lot of changes all at once and ended up causing some problems, and I'm not sure where to start: we have significant packet loss and late packets when traffic crosses the virtual router. Previously we were running hardware routers (ER-8 Pro) and a pair of R720s with ZFS and replication (the servers were directly connected on the 10 Gb interfaces for replication) behind a 1 Gb switch (ES-24-250W). That setup seemed to have better disk I/O but didn't provide the shared storage or HA we needed. We also installed a CRS328 as the backbone for the three servers. Each server has two dedicated NICs and a shared NIC. We have installed a pair of CHRs with multiqueue set to 8 and NUMA enabled.
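For reference, the multiqueue and NUMA settings were applied per VM with `qm`, roughly like this (the VMID 101 and bridge names are placeholders):

```shell
# Enable NUMA awareness for the CHR guest (VMID 101 is a placeholder)
qm set 101 --numa 1

# virtio NIC on the shared bridge with 8 queues
qm set 101 --net0 virtio,bridge=vmbr0,queues=8

# Second virtio NIC on a dedicated bridge, also 8 queues
qm set 101 --net1 virtio,bridge=vmbr1,queues=8
```

The queue count generally shouldn't exceed the number of vCPUs assigned to the VM, since each queue is serviced by a separate vhost thread.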
The SSDs have a primary affinity of 1 while the HDDs are set to 0. My current setup is three R720s, each with 3x 2 TB SSDs and 5x 2 TB HDDs, running Ceph with a dedicated 10 Gb NIC; everything is on a replicated pool with size 3 and min_size 2. The problem: packet loss and late packets.
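For context, the primary-affinity split was set per OSD, roughly like this (the OSD IDs below are placeholders; the value ranges from 0 to 1, and 0 means the OSD is never selected as primary, so client reads land on the SSDs):

```shell
# HDD OSDs: never act as primary (osd.3 is a placeholder ID)
ceph osd primary-affinity osd.3 0

# SSD OSDs: full primary weight (osd.0 is a placeholder ID)
ceph osd primary-affinity osd.0 1.0

# Verify the per-OSD values
ceph osd dump | grep -i affinity
```

Note this only steers reads; writes still hit all three replicas, so HDD latency can still surface on write-heavy workloads.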