Optimizing PostgreSQL DELETE


Recently I needed to clean up our database.

Some tables had grown past 20 million rows, and a fair share of that data was several years old.

That old data is of no use to our users anymore, so I had no choice but to purge the expired portion.


But when I ran a DELETE, I found it was extremely slow. The statement was:

DELETE FROM tablename WHERE id < 500000;


On the SQL side there was little left to optimize; a statement like this can hardly get any simpler.

I also checked and confirmed that the primary key was already indexed.


After much more searching, I finally found the solution (the English-language results turned out to be the useful ones).

It turned out that my table is involved in foreign keys, and the referencing tables have foreign keys of their own, so the references form a nested chain.


So all that was needed was to run, at the table level:

ALTER TABLE mytable DISABLE TRIGGER ALL;

and, after the delete is done, run:

ALTER TABLE mytable ENABLE TRIGGER ALL;
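Putting the whole sequence together, a minimal sketch might look like this (the table name and the id cutoff are placeholders; note that DISABLE TRIGGER ALL requires superuser privileges and also suspends foreign-key enforcement, so referencing rows are not checked while it is in effect):

```sql
-- Disable all triggers on the table, including the internal
-- triggers PostgreSQL uses to enforce foreign-key constraints.
ALTER TABLE mytable DISABLE TRIGGER ALL;

-- The bulk delete now runs without a per-row FK check.
DELETE FROM mytable WHERE id < 500000;

-- Re-enable the triggers immediately afterwards.
ALTER TABLE mytable ENABLE TRIGGER ALL;
```

Wrapping the three statements in a single transaction keeps other sessions from modifying the table while the triggers are off.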

The delete went from taking tens of minutes to finishing in under a minute, finally within an acceptable range.

Here is an excerpt from the English post that explained the cause:

The usual advice when you complain about slow bulk deletions in postgresql is "make sure the foreign keys (pointing to the table you are deleting from) are indexed". This is because postgresql doesn't create the indexes automatically for all the foreign keys (FK), which can be considered a degree of freedom or a nuisance, depends how you look at it. Anyway, the indexing usually solves the performance issue. Unless you stumble upon a FK field that is not indexable. Like I did.

The field in question has the same value repeated over thousands of rows. Neither B-tree nor hash indexing works so postgresql is forced to do the slow sequential scan each time it deletes the table referenced by this FK (because the FK is a constraint and an automated check is triggered). Multiply this by the number of rows deleted and you'll see the minutes adding up.
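As the quoted advice suggests, the usual first fix is simply to index the foreign-key column on the referencing side. A sketch, with illustrative table and column names:

```sql
-- Hypothetical referencing table: orders.customer_id points at customers.id.
-- Without this index, every DELETE on customers forces a sequential
-- scan of orders to validate the FK constraint.
CREATE INDEX idx_orders_customer_id ON orders (customer_id);
```

With the index in place, each FK check becomes an index lookup instead of a full table scan; it only fails to help when, as in the quoted case, the column's values are so heavily repeated that the index is useless.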


After removing the surplus rows,

it is best to run REINDEX and VACUUM; these two operations optimize the table's physical storage:

REINDEX TABLE tablename;

VACUUM FULL tablename;
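To see how much space the maintenance pass actually reclaimed, you can compare the table's on-disk size before and after; a small sketch using built-in size functions (the table name is a placeholder, and be aware that VACUUM FULL takes an exclusive lock and rewrites the whole table):

```sql
-- Total size of the table including its indexes, human-readable.
SELECT pg_size_pretty(pg_total_relation_size('tablename'));
```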


The lesson I took from this:

when googling for English-language solutions, you have to get the keywords right.

For this problem the Google keywords were:

postgres bulk delete

or

postgres optimize delete


Finally, the relevant links:

http://od-eon.com/blogs/stefan/optimizing-particular-case-bulk-deletion-postgresq/

http://www.linuxinsight.com/optimize_postgresql_database_size.html

Source: https://www.f2er.com/postgresql/195959.html
