Replies: 1 comment 1 reply
-
I'm not at all familiar with Postgres, but I can provide some input based on what you're suggesting.
That seems like a dangerous approach--what if your database suddenly gets an influx of queries? You'd then be running an update on the table every time a query comes in (and you might even cause an infinite loop if an update to `Links` via your trigger causes the trigger to fire again). Instead, if Postgres offers a TTL (time to live) option, I'd use that. If not, I'd use a cron job plus a "dirty" flag or something similar to indicate that the link is actually deleted (also known as soft deletion). Then, in your queries, you can filter on that flag.
Databases can often be an expensive and hard-to-scale part of an application--make sure not to overwork yours too much 😉
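A minimal sketch of the soft-deletion idea, assuming a hypothetical `links` table (the column and view names here are my own, not from the thread):

```sql
-- Mark rows instead of deleting them on the hot path.
ALTER TABLE links ADD COLUMN deleted_at timestamptz;

-- "Deleting" a link just stamps it:
UPDATE links SET deleted_at = now() WHERE id = 42;

-- Application queries filter the flag out:
SELECT * FROM links WHERE deleted_at IS NULL;

-- A view can hide the filter from callers:
CREATE VIEW live_links AS
    SELECT * FROM links WHERE deleted_at IS NULL;

-- A periodic job (OS cron or pg_cron) does the real cleanup off-peak,
-- e.g. purging rows soft-deleted more than 30 days ago:
DELETE FROM links WHERE deleted_at < now() - interval '30 days';
```

This keeps writes cheap during traffic spikes: the expensive physical deletes happen on a schedule you control, not on the query path.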
-
Hi, I'm discovering the PostgreSQL universe and it seems fascinating. If you know about it, I want to ask whether you like the idea of having most of the backend embedded in the database. RLS seems great, but I'm referring to integrating functions and triggers.
I'll give you a simple example: I define a `Links` table. Those links can be permanent or have a limited lifetime. In the latter case I'm thinking of using the approach that ChatGPT 3.5 gave me: https://chat.openai.com/share/3be935b5-ecb8-4d89-82b0-eba529b3ba50 (4th question)
The idea is to use pg_cron to remove expired links every hour, and also to remove expired links whenever any event happens on the `Links` table, using functions and triggers. This design implies that my backend server delegates a lot of logic to the database, and has to communicate with it and receive (async) events from it.
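The design described above could look roughly like this; the table definition, names, and guard are assumptions of mine, not taken from the linked chat:

```sql
-- Hypothetical Links table with an expiry column; NULL means permanent.
CREATE TABLE links (
    id         bigint GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
    url        text NOT NULL,
    expires_at timestamptz
);

-- Function that purges expired rows.
CREATE FUNCTION purge_expired_links() RETURNS trigger AS $$
BEGIN
    DELETE FROM links
    WHERE expires_at IS NOT NULL AND expires_at < now();
    RETURN NULL;  -- AFTER trigger: return value is ignored
END;
$$ LANGUAGE plpgsql;

-- Fire the purge after any write to the table. The DELETE inside the
-- function would fire this trigger again, so pg_trigger_depth() guards
-- against that recursion.
CREATE TRIGGER links_purge
AFTER INSERT OR UPDATE OR DELETE ON links
FOR EACH STATEMENT
WHEN (pg_trigger_depth() = 0)
EXECUTE FUNCTION purge_expired_links();

-- Hourly safety-net job via the pg_cron extension.
SELECT cron.schedule('purge-expired-links', '0 * * * *',
                     $$DELETE FROM links WHERE expires_at < now()$$);
```

Note that without a recursion guard like `pg_trigger_depth()`, a trigger that writes to its own table is exactly the kind of loop the reply above warns about.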