I start with a catalog of around 40,000 images which I want to import from my NAS.
I start an import process, which failed after around 2,000 images with a broken connection to the database; the DB process was killed because it ran out of memory.
At this point I checked what was going on. I use the database for other applications too and have never run into this kind of problem.
So I tried a few things to overcome it.
- optimizing memory usage in Postgres (see the settings sketch below).
- extending memory (more memory only means more images get imported before the DB is killed).
- watching the memory consumption on the DB server; if it gets critical, stop the process in PSU, restart PSU (to clean up the session to the DB) and continue the process.
Nothing really solved the unnatural memory consumption.
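As a rough sketch, these are the kind of memory settings one would typically tune for the first point; the values below are only placeholders, not a recommendation and not what I actually used:

Code:
-- Sketch only: typical memory-related settings, set here via ALTER SYSTEM
-- instead of editing postgresql.conf directly. Values are placeholders.
ALTER SYSTEM SET shared_buffers = '1GB';          -- shared cache for all sessions (needs a server restart)
ALTER SYSTEM SET work_mem = '16MB';               -- per sort/hash operation, per session
ALTER SYSTEM SET maintenance_work_mem = '256MB';  -- for maintenance tasks like VACUUM, CREATE INDEX
SELECT pg_reload_conf();                          -- reload config for the settings that don't need a restart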
If you run PostgreSQL on Windows, the default virtual memory is much larger, so you see fewer problems with this there. If you switch off swap, I think you will get the same behavior as on Linux. And memory paged out to disk means slow performance anyway.
So I checked the statements fired against the database, and then I thought: what is going on here?
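If anyone wants to reproduce this, statement logging can be switched on roughly like this (a sketch, needs superuser rights, not PSU-specific):

Code:
-- Sketch: log every statement the application sends, so the prepares become visible in the server log
ALTER SYSTEM SET log_statement = 'all';
SELECT pg_reload_conf();
-- then watch the PostgreSQL server log while the import runs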
Every select statement I see is fired as a new prepared statement, which stays completely in memory until it is explicitly freed or the session terminates.
Examples of the generated queries, captured while monitoring the database:
Code:
prepare prepst87755 as select * from idMultiUserNotification where TimeStamp > (to_timestamp(1692187533.798) at time zone 'UTC' ) order by TimeStamp asc
prepare prepst87764 as select * from idMultiUserNotification where TimeStamp > (to_timestamp(1692187551.609) at time zone 'UTC' ) order by TimeStamp asc
prepare prepst87781 as select * from idMultiUserNotification where TimeStamp > (to_timestamp(1692187578.326) at time zone 'UTC' ) order by TimeStamp asc
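Just to illustrate the Postgres side: inside such a session the accumulated statements show up in the pg_prepared_statements view, and they stay there until they are deallocated or the session ends. A sketch (the statement name is just one taken from the log above):

Code:
-- Run inside the application's session: list all prepared statements it has accumulated
SELECT name, statement, prepare_time FROM pg_prepared_statements ORDER BY prepare_time;

-- A single statement can be freed explicitly...
DEALLOCATE prepst87755;
-- ...or all of them at once (this is effectively what terminating the session does)
DEALLOCATE ALL;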
Until this problem is solved, I won't migrate to PSU. I'm a little bit curious about this: does nobody use Postgres as the database? Or do you simply give the server so much memory that you avoid this? I would call this a bug that makes the software in this combination unusable.
Ingo