VACUUM ANALYZE taking a long time, %I/O and %CPU very low
This guy was seeing the same behaviour as I was: VACUUM ANALYZE was taking forever, and CPU and I/O percentages were hovering around 0. He had the "vacuum_cost_delay" parameter set to 70, which means that Postgres will sleep for 70ms whenever it determines that the accumulated I/O cost has exceeded a certain limit ("vacuum_cost_limit"). Since a 25M-row table isn't going to fit into memory, there's going to be a good deal of reading blocks in from disk, and thus vacuum is going to keep hitting that cost limit and sleeping, over and over.
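Both settings live in postgresql.conf; a minimal sketch of the relevant lines (the values shown are illustrative, not recommendations for your workload):

```
# Cost-based vacuum delay: once the accumulated I/O cost of a vacuum
# exceeds vacuum_cost_limit, the backend sleeps for vacuum_cost_delay ms.
vacuum_cost_delay = 0        # milliseconds; 0 disables the sleep entirely
vacuum_cost_limit = 200      # cost budget accumulated before each sleep
```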
Somehow I had set my delay to 500ms. No wonder it was taking so long. I dropped it down to 0, effectively disabling the cost-based delay feature. Now, 10 minutes later, my table has been vacuumed and analyzed.
Now, you can use the autovacuum daemon to vacuum your tables, and the pg_autovacuum table (where you specify table-specific vacuum parameters) lets you set a per-table value for vacuum_cost_delay. So you can set its "vac_cost_delay" column to 0 to get quick autovacuums of your big tables, while still keeping a system-wide vacuum_cost_delay for the smaller, less critical ones. If you manually kick off a vacuum, though, it looks like it still uses the system-wide defaults instead of the values from pg_autovacuum (why?). Since you can set vacuum_cost_delay without reloading the server, if you need to do a manual vacuum, do a
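As a sketch of the per-table setting (hedged: this assumes the 8.1-era pg_autovacuum catalog, where -1 in a numeric column means "use the system-wide default" — check the docs for your version, and the table name "big_table" is my own placeholder):

```sql
-- Hypothetical example: disable the cost-based delay for autovacuums
-- of big_table only. The threshold and scale-factor columns are left
-- at -1, meaning "use the system-wide default".
INSERT INTO pg_autovacuum
    (vacrelid, enabled, vac_base_thresh, vac_scale_factor,
     anl_base_thresh, anl_scale_factor, vac_cost_delay, vac_cost_limit)
VALUES
    ('big_table'::regclass, true, -1, -1, -1, -1, 0, -1);
```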
SET vacuum_cost_delay = 0; first (or something higher than 0 if you can't afford to peg your disk I/O), and then VACUUM (remembering to set vacuum_cost_delay back to what it was afterwards!). If you do this from the command line, you might want to write a small wrapper script that does this instead of running vacuumdb.
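A minimal sketch of such a wrapper (hedged: the function name and the ${PSQL:-psql} override are my own inventions, and connection details are assumed to come from the usual PG* environment variables):

```shell
# vacuum_nodelay: run VACUUM ANALYZE on one table with the cost-based
# delay disabled for that session only. SET is session-local, so the
# server-wide vacuum_cost_delay is left untouched -- nothing to reset.
# ${PSQL:-psql} is a hypothetical hook so a test can stub out psql.
vacuum_nodelay() {
    db="${1:?usage: vacuum_nodelay dbname table}"
    table="${2:?usage: vacuum_nodelay dbname table}"
    # Feed the statements via stdin so psql runs each one separately:
    # VACUUM refuses to run inside a transaction block, which is what a
    # multi-statement psql -c would give you.
    printf 'SET vacuum_cost_delay = 0;\nVACUUM ANALYZE %s;\n' "$table" \
        | "${PSQL:-psql}" -d "$db"
}
```

Usage would be something like `vacuum_nodelay mydb big_table`, with the session reverting to the configured delay as soon as psql exits.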
The lesson here? Always read the directions, kids.