Buffistas Building a Better Board
Do you have problems, concerns or recommendations about the technical side of the Phoenix? Air them here. Compliments also welcome.
To-do list
Last round of code volunteers gave us PMM, Jon [little SQL], Liese [limited].
We need to sift through the code to investigate streamlining page delivery.
There are a few options:
- Pooling MySQL connections: requires little code change, but diligent MySQL tuning -- therefore not applicable if we want to change to a non-dedicated host.
- Changing DB backends to something more sophisticated, like Postgres: would require changes in the class files, and somewhat limits the hosting choices we'd have, but not severely.
- Fixing MySQL (I wish).
- Redoing the existing SQL, keeping the back end and the default tunings.
Personally, I'm pro the Postgres route, because it's transaction oriented, and can do stored procedures and triggers.
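For the curious, here's a rough sketch of what the transaction side buys us -- the table and column names are made up for illustration, not taken from the Phoenix code:

    -- Several statements succeed or fail as a unit: if anything goes wrong
    -- before COMMIT, a ROLLBACK leaves the data exactly as it was.
    BEGIN;
    INSERT INTO posts (thread_id, body) VALUES (1, 'test post');
    UPDATE threads SET post_count = post_count + 1 WHERE thread_id = 1;
    COMMIT;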
But I'm out of pocket until early April, for sure.
This is something that should be resolved one way or another so we know our future. I'm not sure how much cash we have in pocket for hosting, but we can't stay here forever unless we know we have to.
We're good through the end of the year, money-wise. (Because people ROCK, have I mentioned that?)
With pooling connections, would that be something we could do up and tweak over here, and have it work when we moved to a cheaper solution? Or would it need continual tuning?
From what I've seen of the code, other than pooling connections, I'm not really seeing a whole lot of major improvement just sitting out there. Obviously, there could be a lot I'm not seeing, but it's fairly straightforward code-wise. We're just (like similar applications) high on transactions, since every part of the page is delivered on demand.
I'm not too opposed to switching backends, though that's probably the most labor-intensive option on the table. Don't know about Postgres specifically, but that doesn't matter (me not knowing it, that is).
Since it seems like our problem is specific to MySQL, switching backends may be the best option.
have it work when we moved to a cheaper solution? Or would it need continual tuning?
Once it's tuned, it should be good. Just that we're tuning MySQL itself, outside of our code, and we may not be able to do that outside of a dedicated solution.
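For anyone curious what "tuning MySQL itself" means in practice: the relevant settings (max_connections, wait_timeout, and friends) live in the server's my.cnf rather than in our code, which is why shared hosting makes this hard. You can at least peek at them from a normal SQL session, e.g.:

    -- Check the current connection ceiling and how many threads are in use.
    SHOW VARIABLES LIKE 'max_connections';
    SHOW STATUS LIKE 'Threads_connected';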
Postgres has other benefits -- the triggers and stuff are something I'm excited by. But my excitement is not the point. It's something I'd planned to do with the code for my own learning benefit, and actually implementing it was not a big priority.
Ah, gotcha. Makes sense.
Yah, triggers are handy. Off I go to investigate. It's BSD license, so no cost issues, yes?
I am still willing to contribute money, food, and any of my non-coding skills to help do whatever work is needed so that we can get back to the features list.
I understand that I may not be able to help. This is not harping, I just thought I'd offer. You programmer board builder folks are close to my heart and I appreciate all the hours you have put in.
One thing to consider: in my experience, triggers in a web-based application slow things down too much in any case. On the other hand, even if you can't use the triggers, Postgres may offer better stability and fewer bugs -- I've never played with it, so I don't know this for sure.
Can you elaborate, TB? I'd been hoping to put the number incrementing there, since it's taking a couple SQL calls to do it now. Do you think triggers/stored procedures would be slower than two SQL hits?
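Something roughly like this is what I had in mind -- purely illustrative, the table and column names are invented rather than lifted from our schema:

    -- A BEFORE INSERT trigger assigns the next number inside the database,
    -- so the application only issues the one INSERT.
    -- (Under concurrent inserts this still wants a lock or a unique constraint.)
    CREATE FUNCTION bump_post_number() RETURNS trigger AS '
    BEGIN
        SELECT COALESCE(MAX(post_number), 0) + 1 INTO NEW.post_number
          FROM posts WHERE thread_id = NEW.thread_id;
        RETURN NEW;
    END;
    ' LANGUAGE 'plpgsql';

    CREATE TRIGGER posts_number_trg
        BEFORE INSERT ON posts
        FOR EACH ROW EXECUTE PROCEDURE bump_post_number();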
Well, this was only with SQL Server, Oracle, and MySQL -- which are the only web backends I've ever used -- but yes. With very few exceptions, the triggers themselves constitute an additional hit. (And if you are just doing scalar calculations that don't require any querying, then application code will perform them faster than executing calculations in the database's internal scripting language.) And while in theory it doesn't have to work that way, in practice triggers will usually get invoked at times when they really don't need to be -- usually harmlessly, but again taking up bandwidth.
All this makes sense. Triggers were never intended as performance enhancers. The original intent was to improve database integrity and stability, and mostly that is still the core of them as a feature. They are also sometimes used as an "easy" way to add functionality, and to ensure application independence from the front end. (There are a lot of arguments that middleware and other forms of n-tiering are a better way to do this -- and then you get back to integrity vs. performance arguments again.) Bottom line: doing it in code (and n-tier is still doing it in code) gives you better performance. Doing it in the database gives you better integrity. That's why triggers are used mainly for relational integrity -- where code can't give you much of a performance advantage anyway, and problems have major consequences and are hard to debug.
However, to the extent we use database functionality, Postgres will probably work better. On edit -- OK, I gather we are hand-coding the incrementing. Postgres autoincrement might be faster. But bear in mind that this kind of incrementing still requires an internal database table to keep track of the incremented numbers. In short, you still get two hits -- it is just that one of them is to a table in the data dictionary.
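To put that concretely (names invented, not our schema): Postgres autoincrement is just a SERIAL column backed by a sequence, and the sequence is where that second hit goes:

    -- SERIAL creates an integer column plus a sequence (posts_post_id_seq)
    -- that hands out the next number behind the scenes.
    CREATE TABLE posts (
        post_id   SERIAL PRIMARY KEY,
        thread_id INTEGER NOT NULL,
        body      TEXT
    );

    INSERT INTO posts (thread_id, body) VALUES (42, 'hello');
    SELECT currval('posts_post_id_seq');  -- the number just assigned in this session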
Triggers were never intended as performance enhancers.
Integrity is the issue, really. We're jumping through hoops in code because we have no ability to work with transactions, or do locking.
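For the record, the sort of hoop that goes away with transactions and row locking looks roughly like this in Postgres -- the counter table here is invented for illustration:

    -- Lock the counter row so two simultaneous posters can't grab the same number.
    BEGIN;
    SELECT last_number FROM thread_counters WHERE thread_id = 42 FOR UPDATE;
    UPDATE thread_counters SET last_number = last_number + 1 WHERE thread_id = 42;
    COMMIT;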
You did triggers in MySQL? I didn't know they'd implemented that yet.