Using information_schema and pg_catalog
Review of schemas (here and here)
Review of roles
Facebook timeline and denormalization:
Postgres and MVCC
select * from pg_locks where transactionid = X;
pgbench: simple transaction simulator
When I run
pgbench -c 25 -j 25 -t 400 pgbench   # 25 clients, 25 worker threads, 400 transactions per client
tps = 441
That's one transaction completing every ~2.27 ms (1000 ms / 441 tps).
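A quick sanity check on that arithmetic (a sketch; the 441 tps figure is from the run above):

```python
# At 441 tps, the average gap between transaction completions:
tps = 441
ms_between_completions = 1000 / tps
print(f"{ms_between_completions:.2f} ms")   # ~2.27 ms

# With 25 concurrent clients, each individual transaction's wall-clock
# latency is roughly 25x that (Little's law), not 2.27 ms:
print(f"{25 * 1000 / tps:.0f} ms per transaction")   # ~57 ms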
The code, conndemo2.py, is here.
Experiment 1: run with NUMTHREADS=1, PERTHREAD=1000. Try different isolation levels? Try MySQL? Try sorting vs. no sorting?
Explain SORTING, REQLOCKS, NOWAIT
Try with (NUMTHREADS, PERTHREAD) = (25, 40): lots of deadlocks.
Try with (2, 500): still some deadlocks (can we reproduce one of these deadlocks manually?)
Let's try asking for the locks upfront, with REQLOCKS, with/without NOWAIT.
NOWAIT false: works, but takes 20-40 sec!
NOWAIT true (and that one failure line commented out): a handful of lock failures, but ~10 sec.
Now let's enable SORTING. No problems! 250 tps.
250 tps is not bad. But how does Amazon do it? How could you speed this up?
My transactions DB:
Typical transaction based on input custid, part1, quan1, part2, quan2:
insert into invoice(ordertime, custid) values (now(), custid) returning invnum;
insert into invitem values (invnum, part1, quan1);
update part set quan_in_stock = quan_in_stock - quan1 where partnum = part1;
insert into invitem values (invnum, part2, quan2);
update part set quan_in_stock = quan_in_stock - quan2 where partnum = part2;
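The whole transaction can be sketched end-to-end with Python's stdlib sqlite3 standing in for Postgres (the table definitions are assumptions; `invnum` comes from an autoincrement key instead of `returning`). Note the items are sorted by partnum before the updates, per the SORTING fix above.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    create table invoice(invnum integer primary key, ordertime text, custid int);
    create table invitem(invnum int, partnum int, quan int);
    create table part(partnum int primary key, quan_in_stock int);
    insert into part values (101, 50), (202, 50);
""")

def place_order(custid, part1, quan1, part2, quan2):
    items = sorted([(part1, quan1), (part2, quan2)])  # SORTING: update in partnum order
    with con:  # one transaction: commit on success, roll back on error
        cur = con.execute(
            "insert into invoice(ordertime, custid) values (datetime('now'), ?)",
            (custid,))
        invnum = cur.lastrowid
        for partnum, quan in items:
            con.execute("insert into invitem values (?, ?, ?)",
                        (invnum, partnum, quan))
            con.execute("update part set quan_in_stock = quan_in_stock - ? "
                        "where partnum = ?", (quan, partnum))
    return invnum

place_order(custid=7, part1=202, quan1=3, part2=101, quan2=2)
print(con.execute(
    "select partnum, quan_in_stock from part order by partnum").fetchall())
# [(101, 48), (202, 47)]
```

Against Postgres the same function would use psycopg2 and `returning invnum` in place of `lastrowid`, but the structure, one transaction with sorted updates, is the same.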