Comp 305-001/488-305: Database Administration

Spring 2018: Tuesdays, 4:15-6:45, Comm 013

Week 13

April 17




Normalization

    Facebook timeline and denormalization:
    https://www.facebook.com/note.php?note_id=10150468255628920



To see the entries in the lock table for a given transaction ID X:

    select * from pg_locks where transactionid = X;

My transaction demo

The current version, conndemo2.py, is here.

Typical transaction based on input custid, part1, quan1, part2, quan2:

insert into invoice(ordertime, custid) values (now(), custid);
insert into invitem values (invnum, part1, quan1);
update part set quan_in_stock = quan_in_stock - quan1 where partnum = part1;
insert into invitem values (invnum, part2, quan2);
update part set quan_in_stock = quan_in_stock - quan2 where partnum = part2;  

    My transactions DB:


Actual transactions have ten items, making ten part updates.

The "insert into invoice" is not the problem, unless the same customer places dozens of orders per second. The "insert into invitem" is also generally safe, as there should never be more than one transaction operating on a given invnum (why?). It's the "update part" that is the issue, as this must lock that record of the part table.

Experiment 1: true serial execution

Run conndemo2.py with NUMTHREADS=1, PERTHREAD=1000.

This resulted in about 100-110 tps. The problem here is that each transaction does 10 updates to part (plus 10 inserts into invitem), which is why it is slower than the pgbench value of ~440 tps.

Experiment 2: lots of parallelism

Try with (NUMTHREADS, PERTHREAD) = (25,40). We get lots of deadlocks.

Try with (2,500): still some deadlocks.
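
For reference, a rough sketch of the kind of threaded driver conndemo2.py presumably is. NUMTHREADS, PERTHREAD, and the deadlock-retry idea come from these notes; everything else -- the DSN, the random key ranges, the call to place_order() from the sketch above -- is made up for illustration.

import random, threading
import psycopg2
import psycopg2.extensions

NUMTHREADS, PERTHREAD = 25, 40

def worker():
    conn = psycopg2.connect("dbname=transactions")      # DSN is a guess
    for _ in range(PERTHREAD):
        custid = random.randint(1, 100)                  # made-up key ranges
        items = [(random.randint(1, 500), 1) for _ in range(10)]
        while True:
            try:
                place_order(conn, custid, items)         # the sketch above
                break
            except psycopg2.extensions.TransactionRollbackError:
                conn.rollback()                          # deadlock detected: abort and retry
    conn.close()

threads = [threading.Thread(target=worker) for _ in range(NUMTHREADS)]
for t in threads:
    t.start()
for t in threads:
    t.join()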


Experiment 3: Requesting locks

Let's try asking for the locks up front, with REQLOCKS=True. We can ask for a lock on a specific record (with partnum = %s, where %s is the query's parameter placeholder) with

    select partnum from part where partnum = %s for update;

According to the Postgres manual, "FOR UPDATE causes the rows retrieved by the SELECT statement to be locked as though for update. This prevents them from being locked, modified or deleted by other transactions until the current transaction ends."
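
In Python terms, the lock-request phase might look something like this (a sketch under the same assumptions as above, not the actual conndemo2.py code):

def request_locks(cur, items):
    # REQLOCKS=True, blocking version: ask for each row lock in turn.
    # Each select ... for update blocks until the lock is granted -- or until
    # Postgres detects a deadlock and aborts the transaction.
    for partnum, _ in items:
        cur.execute("select partnum from part where partnum = %s for update", (partnum,))

This would be called at the top of the transaction, before any of the inserts or updates.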

We can, by the way, lock an entire table (e.g. part) with the following:

    lock part in exclusive mode;

If NOWAIT is false, then each transaction waits for each row lock in turn; the only way a lock request fails here is by deadlock, in which case we abort and start the lock requests over.

This works, but takes 20-40 sec!

In this scenario, when a transaction can't get its locks, it aborts (with a deadlock!) and tries again immediately. But for each lock, it waits to see if it will be granted, or if the existing locks will lead to a deadlock. Should we wait between tries a little bit? If so, how long? Should it put the transaction back to the end of that thread's queue? We don't really have a queue. (Note that the lock requests may deadlock, but then we just handle the exception, silently, and try again.)

Try enabling that print statement in the first exception handler.

What happens when we reduce RETRYCOUNT? We can go to 5, and it still usually works.

With NOWAIT true (and that one failure line commented out): a handful of lock failures, but only ~10 sec.

In this scenario, we ask for the locks in rapid succession, but if any lock can't be granted immediately, we release all locks and immediately try again.
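
A sketch of what that release-and-retry loop might look like; the function name, the retrycount default, and the broad OperationalError catch are my guesses, not necessarily what conndemo2.py does.

def request_locks_nowait(conn, cur, items, retrycount=20):
    # NOWAIT version: if any lock can't be granted immediately, Postgres raises
    # a "could not obtain lock" error; we roll back (releasing whatever locks we
    # did get) and immediately try again, up to retrycount times.
    for attempt in range(retrycount):
        try:
            for partnum, _ in items:
                cur.execute("select partnum from part where partnum = %s for update nowait",
                            (partnum,))
            return True                    # got every lock; proceed with the updates
        except psycopg2.OperationalError:
            conn.rollback()                # release partial locks and retry
    return False                           # give up after retrycount attempts

If this runs at the very start of the transaction, the rollback discards nothing but the lock requests themselves.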

We now need a much higher value of RETRYCOUNT; at least 20. Should we make it really big?

But look what happens to the total time! Much better! ~100 tps.

Changing to REPEATABLE READ does not make much difference.


Experiment 4: Sorting the updates

Now let's enable SORTING. No problems! 250 tps. Why does this work?
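
One way to see it: if every transaction asks for its part locks in the same global order (say, ascending partnum), no cycle of transactions each waiting on the next can form, so deadlock is impossible. The change itself is presumably just a sort of the item list before the locks are requested, something like:

items = sorted(items)    # lock/update parts in ascending partnum order in every transaction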

Try:

250 tps is not bad. But how does Amazon do it? How could you speed this up?

Experiment 4.5: try the above but with REPEATABLE READ.


Experiment 5: table partupdate.

This has the following definition:

create table partupdate (
    invnum  integer,
    partnum integer,
    quan_sold integer,
    constraint partupdate_invnum  foreign key (invnum) references invoice(invnum),
    constraint partupdate_partnum foreign key (partnum) references part(partnum)
);

Instead of updating the quan_in_stock entry of table part, we create in table partupdate a record (invnum, partnum, quan_sold). We will then update part.quan_in_stock from these partupdate records at some later date, as part of a "batch" update to table part.

This flirts with atomicity failure, in that part.quan_in_stock may not be accurate. However, if the partupdate record is safely written, we shouldn't risk loss of updates.

The point of all this is that appending records to partupdate should not require locks. Because we include invnum, two different transactions should never be trying to insert the same record.

But we're still at around 4000 ms for 1000 transactions (250 tps).

My file here is conndemo3.py.
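
A sketch of the per-order transaction in this scheme (mine, not the actual conndemo3.py):

def place_order_partupdate(conn, custid, items):
    # Same as before, except each sale is recorded as a new partupdate row
    # instead of an in-place update of part.quan_in_stock.
    cur = conn.cursor()
    cur.execute("insert into invoice(ordertime, custid) values (now(), %s) returning invnum",
                (custid,))
    invnum = cur.fetchone()[0]
    for partnum, quan in items:
        cur.execute("insert into invitem values (%s, %s, %s)", (invnum, partnum, quan))
        # append-only: no row of part is touched, so no row lock on part is needed
        cur.execute("insert into partupdate values (%s, %s, %s)", (invnum, partnum, quan))
    conn.commit()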

Every so often we need to run the following transaction to migrate all the partupdate data into part: 

begin;
with sums as
  (select p.partnum, sum(p.quan_sold) as numsold
   from partupdate p
   group by p.partnum)                          -- summarize partupdate data
update part p
  set quan_in_stock = quan_in_stock - sums.numsold
  from sums
  where p.partnum = sums.partnum;

delete from partupdate;
commit;

Note that we do not need a "foreach partnum that has seen some sales, update part ...." loop.


Experiment 6: table locks

If each transaction locks the entire parts table with

    lock part in exclusive mode;

then the 1000 transactions (25 threads) take 7-8 seconds, for about 125-135 transactions per second. Exclusive table locks, in other words, do slow things down, versus the best version we could do with row locks. The file is cdtablelock.py.
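
In the Python driver this presumably amounts to one extra statement at the top of each transaction (a guess at the shape of cdtablelock.py, not its actual contents):

cur = conn.cursor()
cur.execute("lock part in exclusive mode")   # one table lock instead of ten row locks
# ... then the same ten insert/update pairs as before; the lock is released at commit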



While we're at it, here are two queries that do the same thing:

-- really SLOOOW query:
select distinct pu.invnum from partupdate pu
where (select count(*) from partupdate p where p.invnum = pu.invnum) <> 10
order by pu.invnum limit 200;

-- faster:
select pu.invnum from partupdate pu
group by pu.invnum having count(*) <> 10
order by pu.invnum limit 200;

The second took 10 ms; the first took 92403 ms. The difference: the first re-runs its correlated count(*) subquery for every row of partupdate, while the second computes all the counts in a single pass with group by.