No probs.
If that does not help, I again suggest using pgbench to test and possibly tune your database.
It will easily show you whether you have an issue in your network or whether the DB instance itself is sluggish.
A quick setup would be:
On your workstation (so the network is included), make sure you have a PostgreSQL client installed, which includes pgbench.
Then
1. Log on to the instance with psql and create an empty database named e.g. benchmark
Code:
<path to client binaries>/psql --host=<ip of server> --port=5432 -U postgres
postgres=# CREATE DATABASE benchmark;
postgres=# \q
2. Run pgbench to initialise this test database
Code:
<path to client binaries>/pgbench --host=<ip of server> --port=5432 -U postgres -i -s 50 benchmark
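As a side note on sizing: -s 50 is the scale factor, and pgbench generates 100,000 rows in pgbench_accounts per scale unit, so you can estimate the amount of test data up front. A quick check in Python:

```python
# pgbench creates 100,000 pgbench_accounts rows per scale-factor unit.
scale = 50
accounts_rows = scale * 100_000
print(accounts_rows)  # 5000000 rows in pgbench_accounts
```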
3. Run a simple base benchmark. This example runs with 10 simulated (pgbench) clients, 2 threads and 10000 transactions per client.
Code:
<path to client binaries>/pgbench --host=<ip of server> --port=5432 -U postgres -c 10 -j 2 -t 10000 benchmark
Example output from a laptop calling a shared database installed on a small Mac mini over WiFi (so a network worst-case test, although Resolve still works comfortably on my laptop even with this):
Code:
starting vacuum...end.
transaction type: TPC-B (sort of)
scaling factor: 50
query mode: simple
number of clients: 10
number of threads: 2
number of transactions per client: 10000
number of transactions actually processed: 100000/100000
latency average: 24.220 ms
tps = 412.889766 (including connections establishing)
tps = 412.908226 (excluding connections establishing)
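As a sanity check on any run, the reported latency and tps should be roughly consistent with each other: with 10 clients each waiting on average ~24 ms per transaction, the expected throughput lands close to the tps pgbench reports. A quick back-of-the-envelope check in Python, using the numbers from the output above:

```python
# Rough consistency check: aggregate tps ≈ clients / average latency.
clients = 10
latency_s = 24.220 / 1000   # average latency from the pgbench output, in seconds

estimated_tps = clients / latency_s
print(round(estimated_tps, 1))  # ≈ 412.9, close to the reported tps
```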
Now running it on the DB server (Mac mini) itself, so you can see the network effect even more clearly:
Code:
starting vacuum...end.
transaction type: TPC-B (sort of)
scaling factor: 50
query mode: simple
number of clients: 10
number of threads: 2
number of transactions per client: 10000
number of transactions actually processed: 100000/100000
latency average: 0.000 ms
tps = 3263.303174 (including connections establishing)
tps = 3263.790093 (excluding connections establishing)
You can involve the network even more in the test (and so amplify any issues, if present) by adding the -C option, which forces a reconnection on every transaction. Watch the average latency shoot up and the tps drop, and especially the difference in tps between including and excluding connection establishing.
Code:
<path to client binaries>/pgbench --host=<ip of server> --port=5432 -U postgres -c 10 -j 2 -t 10000 -C benchmark
starting vacuum...end.
transaction type: TPC-B (sort of)
scaling factor: 50
query mode: simple
number of clients: 10
number of threads: 2
number of transactions per client: 10000
number of transactions actually processed: 100000/100000
latency average: 75.181 ms
tps = 133.011475 (including connections establishing)
tps = 156.401047 (excluding connections establishing)
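The gap between the two tps figures lets you estimate how much time each transaction now spends just (re)establishing its connection. A rough calculation from my numbers above (treat it as an approximation; pgbench's internal accounting is more subtle):

```python
# Estimate per-transaction connection overhead from the -C run's two tps figures.
clients = 10
tps_incl = 133.011475   # tps including connection establishing
tps_excl = 156.401047   # tps excluding connection establishing

# With N concurrent clients, per-transaction latency ≈ clients / tps.
latency_incl_ms = clients / tps_incl * 1000   # ≈ 75.2 ms, matches the reported average
latency_excl_ms = clients / tps_excl * 1000   # ≈ 63.9 ms
print(round(latency_incl_ms - latency_excl_ms, 1))  # ≈ 11.2 ms per transaction lost to reconnecting
```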
And here, as a baseline, a workstation on a wired 1Gb connection (with the extended network option -C):
Code:
starting vacuum...end.
transaction type: TPC-B (sort of)
scaling factor: 50
query mode: simple
number of clients: 10
number of threads: 2
number of transactions per client: 10000
number of transactions actually processed: 100000/100000
latency average: 35.431 ms
tps = 282.235230 (including connections establishing)
tps = 337.755780 (excluding connections establishing)
So let's say the network is not an issue. A common cause of a slow DB is a shared memory buffer configured (too) small. A common default is 128MB. That can still be enough depending on usage; just play with it.
The configuration is in postgresql.conf and the parameter you are looking for is shared_buffers.
Do your pgbench test, adjust the parameter, restart PostgreSQL and rerun the pgbench test to see if it changes things positively. Make small to medium adjustments and retest until you are happy.
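For example, the relevant lines in postgresql.conf could look like this (the values here are just an illustrative starting point to experiment with, not a recommendation for your specific hardware):

```
# postgresql.conf -- illustrative starting values, adjust and retest with pgbench
shared_buffers = 512MB        # default is often 128MB; a common rule of thumb is ~25% of RAM
max_connections = 100         # maximum simultaneous DB connections
```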
Also double-check max_connections (the maximum number of simultaneous DB connections), but if you have only a few workstations it is likely OK (I have set mine to 100 just to be sure, but I have another (non-Resolve) app that uses up to 50 connections at once).
edit: an extra remark regarding the network. Make sure you always use the IP address of the database server in your Resolve connection configs, not a hostname. This avoids issues and delays in the hostname resolution process (DNS), a hidden problem I have seen a lot that can lead to massive delays on each network request.
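If you want to see whether name resolution is costing you anything, you can time it directly; an IP literal is parsed locally and skips the DNS lookup entirely. A small Python sketch (replace 'localhost' with your actual DB hostname; the result is only meaningful on your own network):

```python
import socket
import time

def resolve_time_ms(host, port=5432):
    """Time how long it takes to resolve host to an address."""
    start = time.perf_counter()
    socket.getaddrinfo(host, port)
    return (time.perf_counter() - start) * 1000

# An IP literal involves no DNS round trip:
print(f"127.0.0.1 : {resolve_time_ms('127.0.0.1'):.2f} ms")
# A hostname may hit DNS; if this is slow, every new connection pays for it:
print(f"localhost : {resolve_time_ms('localhost'):.2f} ms")
```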
edit2: just to prove my own point, I upped my own shared_buffers (which had been set back to the default after some maintenance I did in the past, and apparently I never noticed) from 128MB to 512MB, and my local server tps went up
Code:
from before:
tps = 3263.303174 (including connections establishing)
tps = 3263.790093 (excluding connections establishing)
to after:
tps = 4443.717255 (including connections establishing)
tps = 4444.516070 (excluding connections establishing)
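That is roughly a 36% throughput improvement from a single config change, worked out from the numbers above:

```python
# Relative tps improvement from raising shared_buffers 128MB -> 512MB.
tps_before = 3263.303174
tps_after = 4443.717255
print(round((tps_after / tps_before - 1) * 100, 1))  # ≈ 36.2 % more tps
```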
rem: my numbers are relatively low compared to what is possible on that hardware, as the database is part of a real-time synchronising cluster updating a second DB. That reduces the numbers a lot, but the idea is the same; just check the relative numbers on your own machine as discussed above. Also, the Mac mini is hosting another virtual machine. It just goes to show you don't need that much for a proper PostgreSQL DB.
Beatstep & APC-40 Resolve Edition Controllers https://posttools.tachyon-consulting.com
Test Rig : 2xXeon (24c) | UNRAID KVM OSX VM's | 128GB | 5700XT | 40Gbe
Prod Rig : i9-7940X (14c) | OSX 10.15 | 64GB | 2xVega 56 | 40Gbe | Tb3 | V:Eizo | A:5.1RME