Benchmarking the insertion of 8333 post journal entries

Petter Reinholdtsen pere at hungry.com
Sat May 20 07:53:22 CEST 2017


[Thomas Sødring]
> Good stuff! Let's be honest, that's not good reading.

Yeah.

> The amount of data being stored is very small so 41 KiB per entry is
> strange. If you are using H2 (in memory db) then I understand that
> there is a cost as indexes etc are being created. But we should also
> test on top of mysql/postgres. I'm not sure what you have persisted
> the data to.

I'm using the in-memory database.  I simply run 'make' to start the
server.

> Maybe I should add a description to get mysql integration working, it
> should work out of the box but it's been so long since I looked at the
> code.

Right.  Personally I would prefer to get PostgreSQL working, as it is
the database integrated into the infrastructure at UiO (monitoring,
backup, etc.).  Do you have a preference for MySQL?

I redid the benchmark on my laptop, this time with less debug output,
and it was slightly quicker.  It could now insert 2.8 records per
second, i.e. around 27% quicker than the previous run.  This time top
reported that the RES size increased by 27 KiB per record.  Note that
top is not a very good tool for measuring memory usage, so take that
number with a grain of salt.  My machine is short on memory, so I
suspect a lot of time is spent swapping.
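
Since top is a blunt instrument here, a more repeatable way to sample
resident memory on Linux is to read VmRSS from /proc directly.  Here is
a minimal Python sketch of that; 16392 is one of the java processes
from the top output below.

  import time

  def vm_rss_kib(pid):
      # Read VmRSS from /proc/<pid>/status (Linux-specific, value in kB).
      with open('/proc/%d/status' % pid) as status:
          for line in status:
              if line.startswith('VmRSS:'):
                  return int(line.split()[1])
      return None

  # Sample the server process once a minute.
  while True:
      print(time.strftime('%H:%M:%S'), vm_rss_kib(16392), 'KiB')
      time.sleep(60)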

Here are my raw numbers:

The run took 49m19.220s wall time to add 8333 entries.

These were the top lines at the start (the columns are PID, USER, PR,
NI, VIRT, RES, SHR, S, %CPU, %MEM, TIME+ and COMMAND):

16392 nobody 20 0 5584048 909240 17768 S 2.0 11.5 1:17.16 java
16152 nobody 20 0 4450724 346284 15856 S 0.0  4.4 0:22.17 java

and these were the top lines at the end:

16392 nobody 20 0 5607636 1.085g 0 S 0.7 14.4 18:53.06 java
16152 nobody 20 0 4450724 330796 364 S 0.0 4.2 0:33.51 java
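
In case anyone wants to check my arithmetic, here is how the per-second
and per-record figures above fall out of these raw numbers (a small
Python sketch; I read top's 1.085g as 1.085 GiB):

  entries = 8333
  wall_s = 49 * 60 + 19.220          # 49m19.220s of wall time
  print(entries / wall_s)            # ~2.8 records per second

  rss_start_kib = 909240             # RES of PID 16392 at the start
  rss_end_kib = 1.085 * 1024 * 1024  # RES at the end, 1.085 GiB in KiB
  print((rss_end_kib - rss_start_kib) / entries)  # ~27.4 KiB per record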

-- 
Happy hacking
Petter Reinholdtsen

