NewsLab
Apr 28 20:36 UTC

pgBackRest is no longer being maintained (github.com)

445 points | by c0l0 | 225 comments | Read full story on github.com

Comments (225)

120 of 225 comments shown
  1. 1. philipallstar||context
    Sorry to hear this. Well done for maintaining a successful project for so long.
  2. 2. timwis||context
    Really sad to see this. I had only recently learnt about this project, and was really impressed by it. I was planning to set it up this weekend (via autobase). I've also been under the impression that it's likely to be what powers the backups in RDS, Cloud SQL, etc., but I may have misunderstood.
  3. 3. oulipo2||context
    Waiting for all the C-level execs saying that "anyway this is not needed, we're going to vibe-code a solution to our production database backups" lol
  4. 4. absynth||context
    The backups will then be hyper-optimized from three hours down to 5 minutes using devnull compression technologies. It's super effective!
  5. 5. duskdozer||context
    Why even waste all this time and money on backups in the first place? Just don't make mistakes.
  6. 6. theandrewbailey||context
    Only for their AI to delete the production database and all the backups, and be forced to write an apology.

    https://news.ycombinator.com/item?id=47911524

  7. 7. dzonga||context
    The A.I will probably steal the code and make it an unmaintainable mess that deletes backups when someone tries to restore
  8. 8. evertheylen||context
    Ah, sad to read this. Does anyone know of good alternatives?
  9. 9. DeathArrow||context
    Postgres has built-in backups starting with version 18.
  10. 10. evertheylen||context
    From what I can find, Postgres 17 [1] introduced incremental backups to pg_basebackup, refined in 18, but nowhere near the full feature set of pgBackRest. Is that what you meant? Having built-in incremental replication to an S3-compatible storage would be great.

    [1]: https://www.postgresql.org/docs/release/17.0/#:~:text=pg%5Fb...
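    For reference, the built-in flow in 17+ looks roughly like this (paths are placeholders; WAL summarization must be enabled on the server first):

```shell
# postgresql.conf must have: summarize_wal = on  (new in PG 17)

# 1. Take a full base backup
pg_basebackup -D /backups/full

# 2. Later, take an incremental backup relative to the full backup's manifest
pg_basebackup -D /backups/incr1 --incremental=/backups/full/backup_manifest

# 3. To restore, reconstruct a complete data directory from the chain
pg_combinebackup /backups/full /backups/incr1 -o /restore/data
```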

  11. 11. whateveracct||context
    doesn't it still work?
  12. 12. evertheylen||context
    Yes! But I'm assuming it will prevent me from upgrading to Postgres 19 in the future.
  13. 13. whateveracct||context
    I'm not familiar with the internals, but is backing up that coupled to Postgres version? That feels so brittle to me.
  14. 14. indigo945||context
    You can of course take a SQL dump that is version-independent, but if you're serious about creating backups, you want to back up the actual on-disk format of the WAL, because that's more efficient and also the only practical way to get point-in-time recovery. (For efficiency, you could alternatively take ZFS snapshots, which work independently of the Postgres version, but those don't give you PITR either.) The WAL format is a Postgres implementation detail, and therefore tools that want to read and write it need maintenance whenever the format changes (which can happen on major version releases).
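    Concretely, the version coupling enters through physical WAL archiving; a minimal sketch (the cp command is just a placeholder for a real archiving tool):

```ini
# postgresql.conf — physical WAL archiving
wal_level = replica
archive_mode = on
# Each archived segment is in the binary WAL format of this major version;
# any tool that parses or restores these files must track format changes.
archive_command = 'cp %p /mnt/wal_archive/%f'
```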
  15. 15. hleszek||context
    Why not try to find a successor instead of archiving the repo and forbidding the use of the name? I'm sure with a 3.8k stars repo you'll find competent people willing to continue the work.
  16. 16. c0balt||context
    It is reasonable to ask for a follow-up project/fork to take a different name. Naming your project, e.g., pgbackrest-ng, does not sound too onerous a requirement and clearly communicates to users that the maintainers have changed (see also paperless-ng/ngx as good examples of such a change).

    Finding a successor is also not easy nor cheap (in regards to time).

  17. 17. xnorswap||context
    You'll also find plenty of potential malware injectors, and who would want the responsibility of trying to vet a successor and work out the difference?
  18. 18. jeswin||context
    There's no way to know if a new maintainer will live up to whatever standards they've kept to date. Archiving should be the default decision, unless there's formal and elaborate handover.
  19. 19. dschuessler||context
    Because you will attract people who will want to take advantage of the trust these 3.8k stars signal to some people, for example, by means of supply chain attacks.
  20. 20. leoc||context
    The Apache Foundation used to help with this sort of governance problem, didn't it? Though maybe pgbackrest isn't quite big and official enough to be the kind of software Apache takes on, and one certainly hears (increasing?) grumbles about Apache's stewardship.
  21. 21. hombre_fatal||context
    Because that rug pulls your users.

    3.8k stars and the name are years of built-up trust with you, not with the person you gave it to.

  22. 22. duskdozer||context
    Those people can just as easily fork it and make a new name then. Otherwise you end up with situations where it's actually an entirely new thing under new developers under the same name. Even riskier in the age of the "AI clean rewrite"
  23. 23. bayindirh||context
    Sometimes you want to hang things on your wall and be done with it.

    I'd personally do the same. I wouldn't want to be bothered by the future maintainers' choices and get feedback/flak for it. It's a well-known and well-respected practice to cycle the name with a "-ng" or "-nx" suffix to signal that this is a newer project with a different set of maintainers.

    Being MIT-licensed, while MIT is not my favorite license, doesn't give anyone free rein to grab and run with things.

    Honestly, in my eyes, 3.8K or 38K stars mean nothing, because Open Source is not about you [0], to begin with.

    [0]: https://gist.github.com/richhickey/1563cddea1002958f96e7ba95...

  24. 24. arbll||context
    A maintainer that is mainly motivated by the 3.8k stars aspect is probably not the person you want. Working on critical OSS software is fun until it's not, especially when you are not paid for that work.
  25. 25. moritzruth||context
    They are not really forbidding the use of the name (unless they have registered a trademark), they probably simply want to avoid confusion.
  26. 26. AndyNemmity||context
    Why is it the responsibility of the person working for free?

    Why is it never the responsibility of the people using it?

    If anyone cares enough, they will. People didn't care enough to pay, so maybe no one cares enough to fork and be the new unpaid custodian.

  27. 27. vova_hn2||context
    > I'm sure with a 3.8k stars repo you'll find competent people willing to continue the work.

    Oh yeah, I'm sure you will find lots of competent people. Like Jia Tan, for example. I've heard he is very competent.

  28. 28. colesantiago||context
    > Since Crunchy Data was sold, I have been maintaining pgBackRest and looking for a position that would allow me to continue the work, but so far I have not been successful. Likewise, my efforts to secure sponsorship have also fallen far short of what I need to make the project viable.

    So this was the problem. I thought Snowflake would pick up sponsorship of this project, but since it is a competing database, it doesn't really make much sense.

    I really wish many critical OSS projects get the sponsorship they need to continue.

    Otherwise the software industry is in real trouble.

    Forking it just passes the buck onto another maintainer with the same problem, this time without the original creator maintaining it.

  29. 29. wg0||context
    Very simple. Name it to pgbackrest-AI and add the line:

    "AI driven backups with smartest world class models optimizing every byte stored via deep AI analysis."

    With that added, a million dollars is just chump change. YC alone would be adding them to all the seasons multiple times over summer, winter, monsoon, etc.

  30. 30. voidmain0001||context
    Even with sponsorship, it's not always appreciated such as Vercel backing Svelte, Vue, etc. https://www.reddit.com/r/reactjs/comments/1g4lu5p/am_i_seein...
  31. 31. colesantiago||context
    The responses in there are dumb and childish.

    I doubt that they have sponsored an OSS project or made it sustainable.

  32. 32. nijave||context
    Postgres doesn't compete with Snowflake. Snowflake recently announced a Postgres DBaaS offering that integrates with Snowflake (actually has competitive pricing with AWS RDS Postgres)

    They're two non-competing verticals. It's a shame Snowflake decided to shrink Crunchy Data's community presence.

  33. 33. fabian2k||context
    I was about to set up Postgres backups with pgbackrest very soon. It looked like the most mature solution for my use case. What I was aiming for was continuous backups to an object storage provider, without a central DB server but the backup tool directly installed on the Postgres server.

    I'll have to look at the alternatives again, I think that was mostly WAL-G and Barman. It looks like Barman doesn't support direct backup to object storage, unfortunately. And I find the WAL-G documentation very confusing. What I'm looking for is WAL streaming and object storage support, to minimize the amount of data that can be lost and so I don't have to run my own backup server.
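    A minimal sketch of the kind of setup I mean with pgBackRest (bucket, endpoint, and paths are made up), with `archive_command = 'pgbackrest --stanza=main archive-push %p'` set in postgresql.conf:

```ini
# /etc/pgbackrest/pgbackrest.conf, on the Postgres server itself
[global]
repo1-type=s3
repo1-s3-bucket=my-pg-backups
repo1-s3-endpoint=s3.example.com
repo1-s3-region=us-east-1
repo1-path=/backup
repo1-retention-full=2

[main]
pg1-path=/var/lib/postgresql/17/main
```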

  34. 34. drcongo||context
    This is exactly what I was setting it up to do this morning. My research came down to this and WAL-G for the same reasons, and I picked pgBackRest over WAL-G because the documentation was clearer.
  35. 35. bobkb||context
    So sad. We have been using this amazing project extensively
  36. 36. hauxir||context
    been using databasus(https://github.com/databasus/databasus) works pretty well so far.
  37. 37. cpursley||context
    Same, was really easy to set up.
  38. 38. zigzag312||context
    This project looks nice, albeit a bit young for a backup tool.

    Did you encounter any issues or limitations?

  39. 39. arend321||context
    I'm also using this project. Easy to configure and operate.

    I am feeling a slight unease using such a recent project for things as important as the database. But the polished interface combined with the easy docker deployment made me use it anyway. Restores need some permission tuning on PostgreSQL but otherwise happy.

    They are very proud of their GitHub star acquisition curve [0] and of the "blessing" by Anthropic [1], though I have yet to verify the Anthropic claim.

    [0] https://www.reddit.com/r/selfhosted/comments/1q94uu9/selfhos... [1] https://www.reddit.com/r/ClaudeAI/comments/1rklvr7/anthropic...

  40. 40. Nelkins||context
    Wow, this is pretty surprising, I was under the impression that this is the leading PG backup/recovery tool.

    Anybody know how WAL-G and Barman compare?

    https://github.com/wal-g/wal-g

    https://github.com/EnterpriseDB/barman

  41. 41. noosphr||context
    >Wow, this is pretty surprising, I was under the impression that this is the leading PG backup/recovery tool.

    https://xkcd.com/2347/

  42. 42. andruby||context
    We've been happy with WAL-E and now WAL-G (successor). The streaming PITR nature of these won over pgbackrest when we did the analysis ~9 years ago.
  43. 43. fabian2k||context
    Are you using WAL archiving? As far as I understand, pgbackrest and Barman can also use direct streaming from the DB (same mechanism as replication), I didn't find any mention of this in the WAL-G documentation.

    With WAL archiving you need to wait for a WAL segment to finish before it's backed up. With streaming backups the deadtime is minimized. At least that's as far as I understand this, I didn't get to try this out in practice yet.
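    Roughly, as I understand the two modes (host and user below are made up):

```shell
# Archive-based: Postgres calls archive_command only once a 16 MB WAL
# segment is complete, so up to a segment's worth of writes is at risk.
# archive_command = 'some-tool archive-push %p'

# Stream-based: pg_receivewal (bundled with Postgres) uses the replication
# protocol and writes WAL to the target directory as it is generated.
pg_receivewal -h db.example.com -U replicator -D /mnt/wal_stream --synchronous
```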

  44. 44. andruby||context
    WAL-G's PITR backups are insurance against data loss through erroneous data manipulations (eg: accidental DELETE/DROP/UPDATE). WAL-G's streaming approach (using pg_receivewal or similar) sends WAL records to backup storage continuously as they're generated, rather than waiting for a full segment to complete.

    On top of that, for availability (and minimizing deadtime), we have 2 replicas using streaming replication. If the lead PG crashes, one of the replicas is promoted to lead (and starts accepting writes), and we "only" lose the writes that haven't been sent over the streaming replication.

    You can fully eliminate that window of data loss with synchronous replication (vs. the default asynchronous replication, which we use). The write slowdown (replica network round trip + second write at the replica) isn't worth it for us.
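    The relevant knobs, as a sketch (standby names are made up):

```ini
# postgresql.conf on the primary
# Asynchronous (default): commits return immediately; a primary crash can
# lose WAL that has not yet reached a replica.
# Synchronous: each commit waits for at least one listed standby.
synchronous_standby_names = 'ANY 1 (replica_a, replica_b)'
synchronous_commit = on   # 'remote_apply' is stricter, 'off' is fastest
```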

  45. 45. dwwoelfel||context
    Are you using `walg wal-receive` for streaming? As far as I can tell, that command will wait for the full wal segment before it pushes anything to storage. I don't see any way to stream wal records continuously in wal-g.
  46. 46. zie||context
    I dunno how they compare, but we have been using barman for a long time very happily. We test our backups every night by restoring from barman into a _nightly DB, which we then give out to users as a training/testing spot, so that we know when it breaks. It hasn't broken in many years now. <3
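    A sketch of that kind of nightly job (server name, host, and paths are invented):

```shell
# Restore the latest barman backup onto a scratch host as a "_nightly" instance
barman recover --remote-ssh-command "ssh postgres@nightly-host" \
    main latest /var/lib/postgresql/17/nightly

# Start it up; if recovery or startup fails, the cron job alerts,
# so the backups are effectively verified every day.
ssh postgres@nightly-host 'pg_ctl -D /var/lib/postgresql/17/nightly start'
```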
  47. 47. __s||context
    I'm one of many wal-g maintainers; it's comparable. I've been inactive for the past few years, but I'm back in the managed postgres game. Hoping to get support for pg17 incremental backups alongside wal-g's existing delta backups, where wal-g compares blocks itself. Be sure to use daemon mode.

    Sad to see a competitor go. I think there's lots of room for improvement here, & C over Golang is particularly nice when postgres wants to run on a system without overcommit.

  48. 48. freakynit||context
    So sad to see this happening..

    I had just last year prepared a detailed guide for reliable Postgres backups to a local volume as well as cloud storage, using pgBackRest, for my own projects.. pgBackRest has worked so well for me

    https://github.com/freakynit/postgre-backup-and-restore-guid...

    Thanks to the author for all the time and effort he put into this project..

  49. 49. 2ndorderthought||context
    I really wish projects like this didn't fall through the cracks and continued to be funded. The struggles of OSS are too real.
  50. 50. freakynit||context
    True.. I truly wish we had a better open-source license, and that more open-source projects adopted it..

    A tiered pricing license... tiering based upon annual company revenue... should start super low for small companies (free for individuals), and jump to thousands of dollars per year for companies with $10+ million in revenue.

    I understand that this might not fully be in the spirit of open source, but what's happening currently is way worse.. giant companies rip off the hard work of open-source software maintainers without compensating them adequately.

  51. 51. topham||context
    Sigh. Bane of my existence is any service which does this.

    My org theoretically makes hundreds of millions, unfortunately none of that money is ours. So I get forced into a procurement process for anything that costs more than (ridiculously small limit), and get stuck using the worst in class because it's cheaper.

  52. 52. duskdozer||context
    May be inconvenient to you, but the point of licenses like that is precisely the inconvenience to companies that aren't willing to pay for the work.
  53. 53. marcus_holmes||context
    I think the point was that this is a company that is willing to pay for the work, but corporate procurement doesn't work like that.

    If you don't have a discretionary spending limit that will accommodate it, then trying to get OSS through procurement is difficult. Who is providing the support contract? What level of indemnity insurance is the supplier covered by? Can you get a spread of three quotes from competitive providers?

    Not to mention that if the supplier isn't VAT/GST registered, the accounts department can be operationally incapable of accepting an invoice or issuing payment.

    Not malicious, this is best practice for a large organisation that needs to prove that it is not doing fraud. But it does present a huge obstacle to buying from small organisations, startups, and one-person OSS maintainers.

  54. 54. freakynit||context
    Agree. Is solving this itself a good product idea? A company specializing in making these deals happen? Taking on the legal and corporate aspects? Kinda like freelancer platforms work.., but more corporate-focused?
  55. 55. 2ndorderthought||context
    It would be great if github or someone did something to support licenses like this. So procurement was more like a cloud spend. Companies could put caps on the monthly spend for the projects they use. Organizations should be used to paying for products from individuals just like how they do from megacorporations.
  56. 56. Chris2048||context
    Would a third party 'productising' FOSS be acceptable to the FOSS community?

    for example, adding support, bug fixes, corp-friendly licencing and pricing models, private code/package repos, code/package signing, etc. Providing biz ppl to be available for meetings, legal protection, PII, etc.

    To foster goodwill, they could even send some of the profit back to the original maintainer, ala pikapods: https://news.ycombinator.com/item?id=31312682

  57. 57. 2ndorderthought||context
    I'm not suggesting productizing but if someone skimmed 0.5-5% off of some of my packages licenses and gave me the rest without me having to do anything I would be happy with that. I think the important thing would be, customers would likely expect less support so licenses should be cheaper.

    People who don't want tiered licenses could definitely just MIT it and walk away, of course.

    I do like the idea of paying back the original maintainers otherwise people could sandbag projects to fork them later.

  58. 58. Chris2048||context
    > skimmed 0.5-5% off of some of my packages licenses

    What do you mean by this? A FOSS product that has a paid packaged version?

  59. 59. marcus_holmes||context
    So... Spotify but for OSS?

    I'm not sure this worked out as well as we thought it might do for the musicians.

  60. 60. jumpconc||context
    Sounds like whoever is getting that money is hamstringing your organization on purpose so they can keep more of your money.
  61. 61. 8organicbits||context
    Is there a measurement that would work better for your organizations setup?
  62. 62. spockz||context
    If none of the money is yours it means it is not your profit. A license expressed in terms of profit instead of revenue would be suitable for you.

    I thought a while back there were some products that had dual licenses, a fairly open license for private use, use in small companies, but requiring purchase and/or contribution back when used in something like a cloud providers SaaS.

    I like open source, but I also can understand the nagging feeling when your (and your contributors work) is used for pure corporate greed.

  63. 63. hoistbypetard||context
    > If none of the money is yours it means it is not your profit. A license expressed in terms of profit instead of revenue would be suitable for you.

    I like this idea, but the devil is in the details. "Profit" is less well-defined than revenue. You have to specify your accounting principles. What counts as an expense that deducts from revenue to help define profit?

    It's not impossible, but there's a lot more variance depending on locality, business structure, etc. than there is with just "revenue".

    Of course, I suspect it all comes down to whether the entity offering the license is large enough and well-enough legally armed to force an audit of the organization taking the license. If they're not able to do that, it's all self-reporting anyway.

  64. 64. vladvasiliu||context
    And even if everything is "legit", plenty of corporations make close to no profit because they're "licensing" or paying whatever other fees to a different company that magically happen to track whatever cash they have on hand at the end of the year.

    See all these multinationals paying close to no taxes in the countries where they operate.

  65. 65. spockz||context
    So. If we fix that loophole we both get proper tax revenue and we get to fund OSS better. I say win-win. Although it will be hard to implement in practice.
  66. 66. Chris2048||context
    > If none of the money is yours it means it is not your profit

    Maybe they mean their org makes a lot of money for their parent corp, but little of that goes into (or is reflected in) their own org's budget?

  67. 67. lelanthran||context
    > Tiered pricing license... tiering based upon annual company revenues... should start super low for small companies (free for individuals), and jump to thousands of dollars per year for 10+ milion revenue companies.

    Too complicated. Make it GPL (not MIT) and offer dual licensing.

    Those corps that need it but are GPL-phobic can have a different license, and can pay for it.

  68. 68. didgetmaster||context
    The project is being abandoned because the maintainer is tired of working for free. They said that they hoped someone would fork it, change the name, and pick up where it was left off.

    Why would anyone do that? If the person who was most passionate about it for over a dozen years has given up because it was never worth the trouble; what fool would think things will be different going forward?

    This is the curse of OSS.

  69. 69. tclancy||context
    While I tend to agree with the line of thinking in this thread that the ethos of open source (and the web writ large) has been taken advantage of by capitalism, I can't quite see this: things belong to a time and place in one's life. The creator feels his time with this project is at an end, but why would that be an impediment to someone who needs a package like this stepping up and maintaining it? Better to do that than build a replacement from scratch (most likely). And more likely to attract new sponsorship by being a reliable steward of a known name (albeit with a suffix or something).
  70. 70. gjsman-1000||context
    > have been taken advantage of by capitalism

    “And many programmers, they say to me, “The people who hire programmers demand this, this and this. If I don't do those things, I'll starve.” It's literally the word they use. Well, you know, as a waiter, you're not going to starve. So, really, they're in no danger.”

    - Richard Stallman in 2001 admitting his ideology can’t explain how a programmer can eat

    In my opinion, though this is HN heresy, the free software ideology and ethos was naïve, utopian, and clueless about how power works, from day 1. His dream is literally structurally impossible, capitalism or no capitalism, so long as humans need money to eat.

  71. 71. pdimitar||context
    What is RMS quote supposed to prove here? We can always find new work? Is that it? If so -- not so fast. When you have a family, your freedom is severely hampered. Most companies understand this and abuse it.

    And yes the free software ideology is as naive as a puppy. Every serious individual understands this. Most HN-ers are in a fairly specific bubble (income brackets, geo-location, political leanings, upbringing, the whole package); of course to them this is "heresy". This is well-understood. Happily for me and many others around here, karma farming is not the goal so we don't mind getting some gray arrow treatment every now and then.

  72. 72. WickedSmoke||context
    Communism occurs in part whenever a need is met or an economic decision is made without using value tokens. Direct access to resources without money happens every day (e.g. anyone using Linux rather than a proprietary OS, or exercising in a public park rather than a for-profit gym). The only thing keeping other products & services hoarded behind paywalls is devotion to capitalist ideology. It literally is a problem of capitalism. The structure of the world outside of people's brains has nothing to do with it.
  73. 73. jancsika||context
    > and clueless about how power works, from day 1

    September 26th, 1983:

    "Dear Mr. Stallman, it is I, gjsman-1000, a time-traveler sent back to tell you to rethink your upcoming GNU project because you are currently clueless about how power works. Yes, you may be able to code up an impressive prototype compiler and revise it until your fingers bleed. Yes, a decade later some zealous followers may follow your lead and maintain it on the bleeding edge. Yes, two decades later others will perhaps start an open source compiler project to wrest control from your successful compiler that is largely maintained without your direct input. And yes, three decades later your compiler team may even merge in new features and improvements that came from the other compiler. But heed my ominous warning: four decades later I will not be able to remember my original point, for time travel is dangerous business and has adverse effects on short and long term memory."

  74. 74. didgetmaster||context
    It is my experience that most people work hard to 'get ahead' and not to merely survive. Yes, we will work for subsistence wages if no other option exists, but the goal is to thrive.

    Some who are opposed to capitalism seem to think that anyone who wants to trade their talents and hard work for more than the minimum is exploiting anyone who wants or needs their product.

  75. 75. watwut||context
    I mean, the repeated claims about starving programmers I see on HN are indeed ridiculously dramatic. They show up in relation to open source, but mostly as arguments for why all those highly paid people just must do unethical things, else they will starve.

    I am not even a fan of Stallman. I think it is OK to produce closed software. But the starving argument is just not it.

  76. 76. shevy-java||context
    > what fool would think things will be different going forward?

    > This is the curse of OSS.

    There are examples of failing forks, and there are examples of forks that became better than the original. It is not possible to generalize this one way or the other solely via a curse-of-OSS conclusion. Funding will always be an issue; but funding is not necessarily the main or only criterion for whether a project fails or succeeds.

  77. 77. cortesoft||context
    An alternative reading is that after 13 years dedicated to a single project, the original author is simply burnt out on it, but a new maintainer can start with fresh passion that will last a number of years.

    Just because someone gets tired of working on something eventually doesn't mean everyone else will immediately feel the same way.

  78. 78. didgetmaster||context
    Did you read the notice on the GitHub site? I think he clearly states that he wanted to continue working on the project but could not justify it after sources of funding failed to materialize.
  79. 79. cortesoft||context
    Sure, but a new maintainer might have different needs. The original maintainer doesn’t have the time now to do the work for free, since they have to also have a job to pay the bills. A new maintainer might have more free time, at least for a while…
  80. 80. jrochkind1||context
    They said they imagined it would (I read as "might") be forked, and if it were, please don't use their name for it.

    I don't think they are "hoping" someone else will take it, exactly. They're just done with it. That's how I read it, they liked working on it, but it wasn't financially sustainable, the project is now over, and my reading is they are sad about it.

  81. 81. jumpconc||context
    The struggles of living in an economic system while completely rejecting that system and pretending it isn't there.
  82. 82. AndyNemmity||context
    There is no evidence of any of that.

    He was paid to work on it. That stopped, he continued to work on it in the hopes he could find someone who would hire him to work on it.

    That wasn’t true, no one has funded it.

    So due to the economic system he no longer maintains it.

    That’s your economic system at work. No one is pretending it isn’t there; this is the outcome of it.

  83. 83. imtringued||context
    That's actually not the problem. The problem is that the conventional funding model for open source does not make sense and nobody has the resources to provide a financial product that actually works, since the projects with a single maintainer are too small of a market to be worth serving for classic financial institutions like banks.

    The business model is as follows: Open source maintenance produces recurring costs (developer salary, infrastructure costs, etc) but these costs are fixed and do not scale with the number of users, only with the development effort. This means the ideal financing structure would be a cost plus system where the maintainer gets paid a salary and the customers (businesses) are spreading the cost among each other so that each business ends up paying less than if they had built or maintained the project in-house.

    The problem here is that each participant's share of the costs is variable: it depends on the number of participants, their individual willingness to spend money, and how that affects the viability of the project as a whole. Participating businesses need some sort of guarantee that they won't be stuck with all of the costs and that there are other participants who will chip in. At the same time, once there is a sufficient number of participants, the participating businesses don't want to overpay. They may commit to a monthly worst-case bill of $5000, but if the total bill is $10000 and there are 100 participating businesses so that each business could pay only $100, said big spender would want the option to lower their spending down to $100 if possible and let others carry more of the financial burden.

    With this sort of arrangement, funding open source software would be rational, since the amount you save by freeloading is insignificant compared to the risk of the project being discontinued due to freeloading.

  84. 84. faangguyindia||context
    One thing people are not taking into account is that many developers now have less time and are working a lot more because AI makes it seem it should be possible to hit those deadlines, etc.

    Also, many programmers have spent their entire funds on tokens, so they are left with neither extra money nor time.

  85. 85. film42||context
    Acquisitions change priorities and layoffs put the squeeze on people. AI is for sure in the mix there, but open source decay is a result of no room in budgets for anything but maximizing revenue.
  86. 86. DeathArrow||context
    I have recently configured pgbackrest for our app. :(
  87. 87. joshmn||context
    I have a moderately sized 2TB production database I have enjoyed using pgBackRest on, and was—this week—going to set it up on another 8TB database we have.

    What's the next-closest thing? wal-g? barman? databasus? I only get to cosplay as a DBA.

  88. 88. drcongo||context
    I can beat you on the timing: I'd never used pgBackRest before, but started setting it up on a project about two hours ago, and by the time I'd finished, the README had been updated.
  89. 89. hosteur||context
    databasus does not do PITR.
  90. 90. zigzag312||context
    Is that info up-to-date? Their readme states:

      **Backup types**
      
      - **Logical** — Native dump of the database in its engine-specific binary format. Compressed and streamed directly to storage with no intermediate files
      - **Physical** — File-level copy of the entire database cluster. Faster backup and restore for large datasets compared to logical dumps
      - **Incremental** — Physical base backup combined with continuous WAL segment archiving. **Enables Point-in-time recovery (PITR)** — restore to any second between backups. Designed for disaster recovery and near-zero data loss requirements
    
    EDIT: It seems PITR was added this March (for PostgreSQL)

    https://github.com/databasus/databasus/issues/411

  91. 91. sgarland||context
    I've used barman on somewhat large-ish DBs (30+ TB), and had no complaints with it. I am a DBRE, if that holds any weight.
  92. 92. joshmn||context
    barman seems to cover "Natural disaster" in their docs. Seems good.

    I'll take a look. Thanks!

  93. 93. briffle||context
    We recently moved from Barman to pgBackRest. Our main complaint with Barman was that incremental backups used hardlinks. Locally that was great: we could back up our 7TB database and the next day store only 20GB in changes. But cloud storage has no concept of hardlinks, so replicating that data meant pushing 14TB. Also, at least when we last looked a while back, compression applied only to the WAL files, unless you used the newer barman-cloud-backup tool, which we did not.

    Also, pgBackRest lets you do the majority of the backup from a physical standby, which is VERY nice for taking the load off production.

    None of these seemed like issues until we looked at pgBackRest and suddenly realized how nice that would be.
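
    For anyone following up on that last point: backing up from a standby is a single option in pgBackRest. A rough sketch of the config, with hostnames, paths, and the stanza name invented for illustration (check the pgBackRest docs for your layout):

    ```ini
    # /etc/pgbackrest/pgbackrest.conf (sketch)
    [global]
    repo1-path=/var/lib/pgbackrest
    backup-standby=y            # copy file data from a standby when one is up

    [demo]
    pg1-path=/var/lib/postgresql/16/main
    pg1-host=pg-primary.example.internal    # primary: backup start/stop, WAL switch
    pg2-path=/var/lib/postgresql/16/main
    pg2-host=pg-standby.example.internal    # standby: bulk file copy happens here
    ```

    The backup still has to be coordinated through the primary, but the heavy file reads land on the standby.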

  94. 94. sgarland||context
    We just piped the backups through pigz for compression; rapidgzip also exists for parallelized decompression (or any other compression algorithm you’d like to use, of course).
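
    The pipeline above can be sketched as follows. pigz writes standard gzip streams, so anything gzip-compatible (zcat, rapidgzip) can read them; database name, paths, and thread count are placeholders, and the fallback to plain gzip is only so the round-trip demo runs anywhere:

    ```shell
    # Prefer pigz (parallel gzip); fall back to plain gzip, which produces
    # the same format, just single-threaded.
    GZ=$(command -v pigz || command -v gzip)

    # Backup: stream the dump straight into the compressor (illustrative):
    #   pg_dump mydb | pigz -p 8 > /backups/mydb.sql.gz
    # Restore: rapidgzip parallelizes decompression of the same file:
    #   rapidgzip -d -c /backups/mydb.sql.gz | psql mydb

    # Self-contained round trip demonstrating the compress/decompress pipe:
    printf 'SELECT 1;\n' | "$GZ" | zcat
    ```

    The nice property is that nothing ever touches an intermediate uncompressed file; the dump is compressed in flight.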
  95. 95. ramraj07||context
    Backing up multi-terabyte production Postgres databases is not merely cosplaying, ha ha
  96. 96. zigzag312||context
    pg_probackup seems to be another one.
  97. 97. 3manuek||context
    The "closest" would be using Barman with hook scripts (https://docs.pgbarman.org/release/3.18.0/user_guide/hook_scr...) if you rely on cloud storage for storing backups.

    https://github.com/aiven-open/pghoard seems like a good option too, but I haven’t tested it yet to have a solid opinion.

  98. 98. infinet||context
    Anyone put the standby on ZFS or other filesystems that can take snapshots for backup?
  99. 99. abrookewood||context
    A previous company I was at did this on the primary. It always seemed to work, but no one was really comfortable with it, largely because there wasn't much ZFS experience on the team at the time, and also because the process did not quiesce the database before taking the snapshot. I think it's still a valid strategy, but not one I have had time to verify thoroughly.
  100. 100. skibbityboop||context
    Not for PostgreSQL, but for MariaDB we run replicas in FreeBSD jails on a server with lots of ZFS space. The jailed Maria instances just stop every hour (so the DB flushes everything to disk), the host snapshots all of their data volumes, and then starts the jails back up. Within a minute or so they're fully caught up to the primaries again. Gives us months and months of recovery checkpoints.

    It's great because it's a completely clean save from a shutdown state, so when we need a scratch copy of a database it only takes as long as cloning whatever snapshot we want (depending on how far back we need to go), then starting a scratch jail that runs from those cloned filesystems. When finished, just shut down the scratch jail and delete the clones; it's like it never happened.
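
    The hourly stop/snapshot/start cycle described above fits in a few commands. A rough sketch, with invented jail and dataset names and none of the error handling or snapshot pruning a real setup needs:

    ```shell
    STAMP=$(date -u +%Y%m%d%H%M)

    service jail stop mariadb01                 # clean shutdown: DB flushes to disk
    zfs snapshot tank/jails/mariadb01@"$STAMP"  # instant, atomic checkpoint
    service jail start mariadb01                # replica catches back up to primary

    # Later, to get a scratch copy from any checkpoint:
    zfs clone tank/jails/mariadb01@"$STAMP" tank/jails/scratch01
    # ...start a scratch jail on the clone, and when finished:
    # zfs destroy tank/jails/scratch01
    ```

    Because ZFS snapshots and clones are copy-on-write, both the hourly checkpoint and the scratch clone are near-instant regardless of database size.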

  101. 101. dijit||context
    Wow! pgbackrest was definitely the premier backup solution for postgres when I last looked at the ecosystem properly.

    It was the only solution that seemed to take restoring and validating as seriously as “taking a backup”, which led to an unfortunate situation with my employer. (details here: https://blog.dijit.sh/that-time-my-manager-spend-1m-on-a-bac...)

    This is really a major loss. :(

  102. 102. nailer||context
    Mentioned this on X too, but CockroachDB should sponsor this - their audience is Postgres people, and open source contributions can be great marketing.
  103. 103. pjmlp||context
    Plenty of comments of "So sad I have been using this".

    How many actually contributed back to keep it going?

  104. 104. LetMeLogin||context
    I am not sure why you are gatekeeping this. Can people not comment now that they are sad because of what happened?
  105. 105. pjmlp||context
    Gatekeeping?!?

    Those that paid, or did any kind of contributions upstream are entitled to be sad.

    Others should consider this is what happens to that lego piece in Nebraska, when no one contributes, and everyone uses it.

  106. 106. piva00||context
    That is exactly gatekeeping, no? You are only entitled to feel sad if you contributed effort or financially, otherwise you aren't allowed to feel.

    Why can't others that just used the tool feel sad? It is supposed to be used, it's the whole reason for it to exist; not everyone using it will have technical expertise or money to contribute to it, feeling sad about it when it solved issues for someone is a completely normal response.

  107. 107. AndyNemmity||context
    The reason for something to exist is not to be used. He was paid while doing it; the pay stopped, and he kept doing it anyway. Now he wishes to stop.

    The reason for something to exist is someone finds joy doing it. Especially when they are unpaid.

    The sadness should be focused on his inability to support himself with a tool that a lot of companies and people are clearly using and gaining value from.

  108. 108. piva00||context
    The reason for a tool to exist is to be used, even if just by a single person. Projects that aren't tools can certainly exist "just for the joy of it", but a tool, by definition, has at least one use, and building one gives its author joy precisely because it is useful.

    The sadness doesn't need to be focused anywhere; you can feel sad about more than one thing at a time. People can be sad that a tool they think is great, have relied on, and has been important for their use case is going away, while also being sad that such a great tool doesn't get enough support from companies. Both can be true; there's no need to control what people can or should feel.

  109. 109. electroly||context
    They're right. This is over the top. Your initial post in this thread was sensible (telling the users of Pgbackrest that they should have supported it if they didn't want this to happen, and saying nothing about what emotions are valid to have), but you took it much further here. People should financially support the OSS projects they use, and the lack of such support is why this project is no longer maintained, but claiming people aren't allowed to feel anything about it is just playing a game that isn't helping the cause. We all know this problem, and being sad while having not supported the project isn't a statement that we disagree that the problem exists. It's a big stretch to assume that it is.

    I've never heard of this project before and I still think it's a bummer that a tool people liked and that the maintainer cared about was unable to find backing. I was never going to support it; I just heard of it for the first time today and I don't use it! I'm still sad. We're not robots here. We're fellow developers, and we know it's tough out there.

  110. 110. Aurornis||context
    > Those that paid, or did any kind of contributions upstream are entitled to be sad.

    I didn’t even use pgbackrest but I’m still sad to see this.

    I should have checked the comments first to determine my eligibility to be sad about this issue, before I had feelings that upset the sadness gatekeepers.

  111. 111. jmull||context
    I’d think the lesson here is obvious, but maybe not.

    If you thought this project had value, you could’ve contributed to it. You probably still could.

    Or, if you think its value is worth $0 (to you), maybe it’s not really that sad (to you).

    People are expressing sadness as if there was nothing to be done about it, but, of course, there’s a really straight-forward thing that could’ve been done about it (possibly still could).

  112. 112. FartyMcFarter||context
    The number of maintainers is always smaller than the number of users for any successful project. GitHub displays the number of contributors as 57, I don't know if that's small or not.
  113. 113. victorbjorklund||context
    It's such a strawman to claim that you cannot be sad when something disappears that you have not contributed to financially or with your work. Someone can say they are sad that Notre Dame burned down even if they haven't personally contributed to Notre Dame.
  114. 114. jhardcastle||context
    That comparison is fallacious too, I think.

    Something burning down is a tragedy, beyond anyone's control. It's also possible to love something for its beauty, and be sad that a globally historic monument suffered such an act of god that the irreplaceable art and craftsmanship is gone forever.

    Something closing down, perhaps because there was not enough money to sustain its continued operation, when tens of thousands or hundreds of thousands of people were using it? That's a perfectly appropriate time to remind folks, "if you like free software, consider donating to help sustain the almost full-time effort it takes to keep packages like this alive."

    Op said, "this is sad [because] I've been using this," and the implication is, "I want to keep using this but now I can't because it's gone" and making the connection that "one way to prevent this from happening to other packages you like is to contribute financially."

  115. 115. victorbjorklund||context
    Alright, take a park closing then. Can you be sad about that if you haven't personally raised money to finance the park?
  116. 116. esafak||context
    Yes, I can't finance every park. I can feel sad about people suffering throughout the world without personally supporting them all.

    I am an active open source contributor.

  117. 117. victorbjorklund||context
    I can’t finance every single open source project in the world either (I am also an open source contributor but with very small libs).
  118. 118. SoftTalker||context
    I pay taxes. I pay for every park in my city. I pay for state and national parks too. I rarely/never use them. I have no choice. That makes me sad. I wish I could direct where my personal tax dollars were spent, but that kind of defeats the point of taxes, which are to fund the things that nobody wants to pay for (or are impractical to pay for, individually).
  119. 119. jamespo||context
    Ah, the "I would pay for Firefox, but..." fallacy
  120. 120. victorbjorklund||context
    There are parks that are not owned by the govt.