This is often caused by network traffic: in most cases I find the kernel handles all of the send and receive work for your NIC on a single core. You can adjust this so that multiple cores share the work of sending and receiving data via your NIC (eth0 in these examples).
multicore eth0 RX
# enable multicore RPS: f is a hex CPU bitmask covering CPUs 0-3
echo f > /sys/class/net/eth0/queues/rx-0/rps_cpus
# also set the flow entry counts, or the setting above will have no effect
echo 32768 > /proc/sys/net/core/rps_sock_flow_entries
echo 32768 > /sys/class/net/eth0/queues/rx-0/rps_flow_cnt
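The f written above only covers CPUs 0-3; on a machine with a different core count you may want a mask covering every online CPU instead. A minimal sketch, assuming a single RX queue and at most 32 CPUs (larger machines need the kernel's comma-separated multi-word mask format, which this does not handle):
# build a hex mask covering all online CPUs and apply it to rx-0
NCPUS=$(nproc)
MASK=$(printf '%x' $(( (1 << NCPUS) - 1 )))
echo "$MASK" > /sys/class/net/eth0/queues/rx-0/rps_cpus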
multicore eth0 TX
echo f > /sys/class/net/eth0/queues/tx-0/xps_cpus
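Note that these echo writes do not survive a reboot. One sketch for making them persistent, assuming your distro still executes /etc/rc.local at boot (a systemd oneshot unit or a udev rule works just as well):
#!/bin/sh
# /etc/rc.local - reapply the RPS/XPS tuning at boot
# (sketch; the file name and boot mechanism depend on your distro)
echo f > /sys/class/net/eth0/queues/rx-0/rps_cpus
echo 32768 > /proc/sys/net/core/rps_sock_flow_entries
echo 32768 > /sys/class/net/eth0/queues/rx-0/rps_flow_cnt
echo f > /sys/class/net/eth0/queues/tx-0/xps_cpus
exit 0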
You may not notice a difference if you are not running servers or other high-throughput machines, but in every stress test I've done, and on real-life high-throughput nodes, this has made a huge difference in network performance and stopped the high pings caused by NIC traffic being pegged to a single core.
Watch it in action:
watch -n 1 'grep -E "(CPU|eth|enp|IRQ)" /proc/interrupts'
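To confirm that the softirq work itself (not just the hardware interrupts) is spreading across cores, you can also watch the NET_RX/NET_TX rows of /proc/softirqs; this is a quick system-wide check rather than something specific to one NIC:
watch -n 1 'grep -E "(CPU|NET_RX|NET_TX)" /proc/softirqs'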