Wednesday, May 01, 2024

Hacking on PostgreSQL is Really Hard

Hacking on PostgreSQL is really hard. I think a lot of people would agree with this statement, not all for the same reasons. Some might point to the character of discourse on the mailing list, others to the shortage of patch reviewers, and others still to the difficulty of getting the attention of a committer, or of feeling like a hostage to some committer's whimsy. All of these are problems, but today I want to focus on the purely technical aspect of the problem: the extreme difficulty of writing reasonably correct patches.

There are tons of examples that I could use to demonstrate this, but it would be unfair to embarrass anyone else, so I'll take an example drawn from my own very recent experience: incremental backup. I consider this a moderately complex feature. It was an ambitious project, but there are certainly much more ambitious things that somebody might want to do, and that some people have done. This project has actually been brewing for several years; an earlier incarnation of the project morphed into the effort to create pg_verifybackup. In 2023, I worked back around to the main event again. I spent several months hacking on incremental backup in the first half of the year, and then several more months on it in the last quarter, ending with a commit of the feature on December 20, 2023.

On December 21, there were four commits, two by me and two by Tom Lane, fixing defects in that commit. By January 15, there were sixteen more followup commits, of which only two were planned. Two of those were by Tom Lane, one by Michael Paquier, and the rest by me. Pretty much all of these were, at least as I see it, fixing real issues. It wasn't like the mailing list was torturing me to fix stupid things that didn't matter; they were finding a whole bunch of dumb oversights in my work. After the first few weeks, the pace did slow down quite a bit, but the worst was yet to come.

On March 4, I committed a fix for incremental backup in the face of concurrent CREATE DATABASE whatever STRATEGY file_copy operations. This was formally a data-corrupting bug, although it is unlikely that many people would have hit it in practice. On April 5, I committed a fix for a data-corrupting bug that would have hit practically anyone who used incremental backup on relations 1GB or larger in size. On April 19, I committed a fix for an issue that would have made it impossible to restore incremental backups of PostgreSQL instances that made use of user-defined tablespaces. On April 25, I committed code and documentation improvements in response to observations that, if the checksum status of the cluster was changed, you could get checksum failures after restoring. These failures wouldn't be real - in reality your data was fine - but they would look frightening.

This is not an exhaustive enumeration of everything I've done to try to stabilize incremental backup. For example, along the way, I broke the buildfarm several times trying to add more tests, since obviously I didn't have sufficiently good tests. If you go through the commit log, you can see my frantic efforts to stabilize the buildfarm before the howling mob of angry hackers descended on me. But the oversights mentioned in the previous paragraph, particularly the middle two, were serious oversights. They didn't indicate any design-level insufficiency, so the fixes were very simple, but the reasonable reader might wonder how such mistakes survived testing. It's not as if I didn't test -- or at least, I thought I had tested. I put significant time and energy into both manual testing and the writing of automated test cases. Jakub Wartak also put an enormous amount of time and energy into testing, for which I remain profoundly grateful, and he somehow didn't find those problems, either. How is that even possible?

One possible theory is that I'm not really very good at this whole hacking on PostgreSQL thing, and certainly there are people who are better at it than I am, but I don't think that can be the whole explanation. If it were, you would expect the troubles that I had here to be unusual, and they very much aren't. In fact, some people have had much worse experiences than I have had with this feature, resulting in patches on which someone has spent a lot of time having to be thrown out entirely, or in committed patches representing large amounts of work being reverted, or in serious bugs making it all the way through to release, necessitating after-the-fact stabilization. I remember a case where a serious data-corrupting bug that I introduced wasn't found for something like two years, and that kind of thing isn't uncommon. As far as I can tell, everyone who works on PostgreSQL struggles to write code well enough to live up to the project standards every time they sit down to write a patch, and even the very best hackers still fail at it from time to time, in small ways or sometimes in large ones.

I believe that this is part of what's behind many of the problems that I mentioned in the opening paragraph. For example, suppose you're lucky enough to be a committer. Every time you commit one of your own patches, you're at serious risk of having to drop everything and put a ton of work into fixing everything you did wrong, either as soon as you do the commit, or when the problems are found later, or both. Every time you commit one of somebody else's patches, you're at risk of having to do the same thing, which means you're probably going to be reluctant to commit anything unless you're pretty sure it's pretty good. That means that committing other people's patches is not primarily about the time it takes to type git commit and git push, but about all of the review you do beforehand, and the potential unfunded liability of having to be responsible for it afterward. I haven't talked to other committers about the extent to which this weighs on their decision-making process, but I'd be astonished if it didn't. There's one particular patch I remember committing - I won't mention which one - where I spent weeks and weeks of time reviewing the patch before committing it, and after committing it, I lost most of the next six to nine months fixing things I hadn't caught during review. That is the sort of experience that you can't afford to repeat very often; there just aren't enough months in the year, or years in your working life. I think it was totally worth the pain, in that particular case, but it's definitely not worth that amount of pain for a random patch in which I'm not particularly personally invested.

And that obviously has the effect of limiting the number of people who can get things committed to PostgreSQL. To become a committer, you have to convince people that you're one of the people who can be trusted to give the final sign-off to other people's patches. That requires both technical and diplomatic skill, but the technical skill alone takes thousands of hours to develop. And then, if you want to keep being able to commit significant patches, whether your own or someone else's, you have to continue spending at least hundreds and probably over a thousand hours on it, every year, in order to maintain the necessary skill level. Not everyone is able or willing to do that, which means that the pool of active committers doesn't grow a whole lot: people are added, but people also move on. And that in turn means that the number of promising new contributors who can get enough committer attention to become committers themselves is also quite limited. Existing committers tend to focus their attention on the most promising patches from the most promising developers; other people, to some greater or lesser extent, get frozen out. Even committers can get frozen out, to a degree: if you commit something that turns out to have major problems, you're going to get a fair amount of blowback from other committers who want to spend their time either on their own patches or on the patches of non-committers, not cleaning up after you, and that blowback is likely to make you more reluctant to commit major patches in the future. That's as it should be, but it still has the effect of further restricting the rate at which stuff gets done.

And of course, all of this also affects the tone of the community discourse. Non-committers get frustrated if they can't get the attention of committers. Reviewers get frustrated at people who submit low-quality patches, especially if repeated rounds of review don't result in much improvement. Committers get frustrated at the amount of time they spend cleaning up after other people's mistakes, or worse still, their own. I genuinely believe that almost everyone has the intention to be kind and well-mannered and to help others out whenever possible, but the sheer difficulty of the task in which we are engaged puts pressure on everyone. In my case, and I'm probably not alone in this, that pressure extends well beyond working hours. I can't count the number of times that I've been rude to someone in my family because I turned the buildfarm red and had to spend the afternoon, or the evening, fixing it, or often enough, just reverting my ill-considered changes. I'm not sure how other people experience it, but for me, the worst part of it is the realization that I've been dumb. Had I only done X or tested Y, I could have avoided messing it up, and I didn't do that, or at least not correctly, and now here we are.

Since PostgreSQL is the only open source project in which I've ever been involved, I don't really know to what degree other projects have encountered these problems, or how they've solved them. I would like to see the developer base grow, and the amount that we get done in a release scale, in a way that it currently doesn't. But I have also seen that just committing more stuff with less caution tends to backfire really hard. After 15 years as a PostgreSQL developer, most of it full time, and after 30 years of programming experience, I still can't commit a test case change without a serious risk of having to spend the next several hours, or days, cleaning up the damage. Either programming is intrinsically difficult, and that's just to be expected, or we're doing things that make it harder for ourselves. I suspect it's at least partially the latter, but I don't know.

Your thoughts welcome.

16 comments:

  1. In theory things could be easier, one day, if PG were written in a higher-level language, say Rust. I think for a database that just moves the problems to a different level. At this point the PG code is riding too close to the metal to use anything but C.

    The thing I like to remind people is that this project has been around for decades, during which over a hundred people (or whatever COUNT(*) FROM contributors is) have picked through the code trying to find low hanging fruit to contribute. Everything left to do is astonishingly hard; if it weren't it would have been done already.

    In my realm, I have a couple of PG TODO items I work on in the background, things like what the max usage count on buffers should be. They've been on that list since at least 2007. Some of these features are really hard to deliver, both the research and the implementation. Incremental backup certainly qualifies as something on the nastiest end of difficulty, where mistakes turn into corruption.

    1. workingjubilee, May 03, 2024 1:25 PM

      Can you perhaps explain how exactly Postgres is "too close to the metal to use anything but C"? I have implemented significant swathes of the Postgres internal APIs that have to do with manipulating raw bytes in Rust. Technically, implementing them in C the way I do in Rust, or indeed the way Postgres does in C, is not allowed, because it involves violating C's strict aliasing rule. Postgres only gets away with this by begging the compiler for permission using compiler flags, which no C compiler is required to acknowledge.

      Meanwhile, Rust simply doesn't have the strict aliasing rule when it comes to bags of bytes.

      C is a far too high-level language to implement a database like Postgres in.
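
      To make the strict-aliasing point concrete, here is a minimal, self-contained sketch (in the spirit of, but not taken from, the PostgreSQL sources) of the kind of type punning at issue: a raw block of bytes is accessed through a struct pointer. The struct and field names are illustrative stand-ins, not the real PageHeaderData or page code; the one real-world detail assumed is the compiler-flag workaround mentioned above, namely that the PostgreSQL build asks for -fno-strict-aliasing.

        /*
         * Illustrative only: reinterpret a raw block of bytes (as if just
         * read from disk) through a struct pointer.  Formally, ISO C's
         * strict aliasing rule makes this access undefined behavior, which
         * is why the build has to ask the compiler not to exploit the rule.
         */
        #include <stdint.h>
        #include <stdio.h>
        #include <string.h>

        typedef struct ToyPageHeader
        {
            uint32_t    checksum;
            uint16_t    lower;          /* offset to start of free space */
            uint16_t    upper;          /* offset to end of free space */
        } ToyPageHeader;

        int
        main(void)
        {
            /* The union guarantees alignment suitable for the header struct. */
            union
            {
                char            bytes[8192];
                ToyPageHeader   align;
            } block;

            memset(block.bytes, 0, sizeof(block.bytes));    /* stand-in for read() */

            /* Accessing the buffer through this pointer is the aliasing issue. */
            ToyPageHeader *hdr = (ToyPageHeader *) block.bytes;

            hdr->lower = 24;
            hdr->upper = 8192;

            printf("free space on page: %u bytes\n",
                   (unsigned) (hdr->upper - hdr->lower));
            return 0;
        }

      With strict aliasing in force, an aggressive optimizer is allowed to assume that accesses through the char array and through the struct pointer never refer to the same storage; -fno-strict-aliasing tells it not to make that assumption.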

  2. It happens to the best of us. I've got a similar amount of experience, across a wide variety of languages, and I still commit stupid stuff sometimes.

    There's also the old adage that programmers only test the happy path. You really do want a sadistic SOB that wants to hurt you in charge of testing.

  3. I agree with everything you are saying here. It is tough to contribute to and commit in PostgreSQL, and many worthy people have given up after sinking months of work into patches, even though everybody agrees that the feature would be desirable (index skip scan, for one).

    On the other hand, it is exactly this grueling process that keeps the software quality high and keeps PostgreSQL from succumbing to creeping featuritis. I just wish there were a way to have all that without exhausting contributors and committers.

    I think you are doing a good job, both technically and in your interaction.

  4. Andrey Borodin, May 02, 2024 7:51 AM

    Is Postgres hacking inherently more complex than developing some external tool? AFAIK probackup, backrest and wal-g have had incremental backups for many years.
    Making something as an extension or a standalone tool is orders of magnitude easier than bringing it into the core.
    The core is the crossroads of all extensions and tools and possible states and possible times...

  5. David Koňařík, May 02, 2024 4:39 PM

    First off, thanks for your work. From my experience, PostgreSQL is very stable and its code is relatively easy to understand, considering its age and size.

    That said, I think we programmers need to pay attention to our tools and processes. It's easy to waste time on something that could be done more easily, and that time is especially valuable when you're dealing with volunteers.
    For example, you mention "turning the buildfarm red", and from what I can tell, the buildfarm only tests commits that have made it to master or a stable branch. That's nice, but it makes it a "last line of defence", not what it really should be: a pre-commit check that gives you the confidence to commit. Most sizeable FLOSS projects I have contributed to have some kind of workflow for automatically testing patches (PRs/MRs).

    I can't tell you what to do, I'm not even a Postgres contributor. But I think putting effort into process improvements is a good use of time.

    1. I'm sure there's room for more and better processes - but just worth briefly pointing out that the buildfarm isn't the only line of defense here. Of course there are tests to run locally before committing, and in addition to the buildfarm there's also something called "cfbot" that automatically tests patches before they are committed. https://wiki.postgresql.org/wiki/Cfbot

  6. As a Postgres user, thank you. I remember an interview with Richard Hipp praising Postgres developers for their amazing work (https://corecursive.com/066-sqlite-with-richard-hipp/). It’s hard.

  7. Thank you for your work. PostgreSQL is an amazing gift to the world, and a brilliant example of the power and potential of open source software development.

    I am not a contributor to PostgreSQL, but I do contribute to other projects. One hard-fought lesson I have learned is to reduce the scope of the changes I make.

    Looking at your contributor stats from last year: https://rhaas.blogspot.com/2024/01/

    Of all the PostgreSQL committers (excluding non-committers in the list), you have the highest ratio of (lines of code / commit) by a significant margin.

    That could be due to the stuff that you are working on being more complex, or it could be an opportunity to reduce the surface area of your commits, which might result in a smaller testing blast radius.

    1. Interesting observation and worth thinking about, although I think last year may have been somewhat of an outlier in that regard.

  8. Hey Robert, I really enjoyed this post. It's insightful and you're a good writer.

    I can tell you that the difficulty of making meaningful Postgres contributions isn't new. The platform's been complicated since the very beginning. I wrote much of the storage management software in University Postgres before Jolly and Andrew added SQL in pg95. That's when the project escaped Berkeley to become a genuinely global community of contributors. Lots of hard work, intricate design and "you are not expected to understand this" code, way back in my day.

    That's partly because we weren't mainly writing a database system. We were building a platform for exploring research ideas. It needed to have lots of surface exposed so MSc and PhD students could test out hairball ideas on it. No-overwrite storage? Tuple-level rules or rewrite rules? User-defined types, functions, aggregates? Functional indexing? Access method extensibility? Support magneto-optical disk? Sure, let's do all that stuff!

    Possible that the pain you feel is due to design decisions I helped make back then. If so, regrets! Still: it was a pretty cool framework for all that experimentation. A few of those hairball ideas even became mainstream. And it launched a lot of great careers in the database industry.

    Notwithstanding that difficulty, I'm absolutely gobsmacked (and thrilled) that the project continues to advance, more than thirty-five years after we had a small amount of useful stuff working as members of Stonebraker's research group. You folks continue to deliver an enormously popular open source database system -- fast, powerful, enterprise-grade. I can think of few other software artifacts that are as successful at a comparable age.

    I'm really glad you and the rest of the community continue to push the project forward. I hope you find the work interesting and rewarding, notwithstanding the difficulty.

    Thanks!

    1. Thanks, Mike! I don't think I've run across your name before, or if I have I regrettably don't remember, but it's pretty amazing to find out that someone from the early days actually read my blog! And felt moved to leave a comment, no less!

  9. PostgreSQL is low quality. I wish we hadn't switched from MSSQL. This comes from an open-source advocate: I don't like Windows, I don't like Microsoft products, and I made the company I work for run Linux on all desktops. However, switching to PostgreSQL for a product with large databases was a mistake. What's worse, all the major shortcomings revealed themselves very late in the process, when we had already migrated half of our customers' databases to PostgreSQL. We are now migrating back to MSSQL (for lack of a better option). I wish we had chosen MySQL or some other OSS alternative.

    1. If it upset you enough to post an off-topic comment, why not enlighten the rest of the peanut gallery with some details on the problems you ran into?

  10. Interesting read, thank you. I am not involved in/with PgSQL (but I appreciate it greatly), but I would say that I understand your feeling, and I think the lesson is this: programming is hard, and it is even harder because at many layers we use mediocre tools. Even now, trying to comment on this post with my Google account, logged in, I cannot use it ("because"). In my current programming I was recently hit by the fact that SQLite does not sort text using Unicode, then by scrolling being broken on my page for the Nth time (a CSS issue), then by some genius violating the IDisposable contract (C#), so I have to write a ton of countermeasures, hacks, and workarounds for other people's issues, plus discover and fix my own bugs. The effect is that I feel like I am walking through molasses: fixing actual bugs is tiresome, but working around designs that are broken on purpose is really depressing, and with the years it only gets worse.

    1. Yeah, there are tons of bad tools and software out there. I feel lucky to be working on PostgreSQL, which overall is fairly good and well-commented code, and to be working with people that, on the whole, I like. But even with those advantages it's very difficult, and without them it would be worse.
