Cool beans, Tom!
Also, way back when, I volunteered in a "don't really know all that much about this stuff, but I want to learn" kind of way, and as I've since gone back to school in CS, I could possibly be a little more useful now.
Do you have problems, concerns or recommendations about the technical side of the Phoenix? Air them here. Compliments also welcome.
I do have a small request re: the bookmarks list. Is there a way we can do like they do at PF, and add a note indicating why something is bookmarked? Cause I look at my bookmarks list in utter bafflement half the time...
Consuela, that's already on the list.
I've been monitoring the system, and looking at the code, and have some information to report on the issue.
Please share.
Has the board been running a little slow for anyone else today?
A little, Sean, but not bad.
Sorry for the redundancy, then, and thanks!
OK, first of all, I was supposed to set up the mailing lists, but I've been slacking. I apologize for that.
Here's the short summary about the performance:
Basically, every time the code queries the database, it uses a "connectAndQuery" function, which gets called around three or four times per page view. The first problem is that database connects are extremely expensive: every time we connect to the database, mysqld forks a copy of itself, which then goes away when the connection is closed.

The second problem is that for every query, the entire table gets loaded into memory, and then the table gets written out as HTML. This is especially bad with threadsucks, where the size of an httpd process can grow to over 40 megabytes. A far better approach is to fetch the result one row at a time, print out a line of HTML, and then fetch the next row.
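To make the row-at-a-time idea concrete, here's a minimal sketch. I'm using Python's built-in sqlite3 just so the example is self-contained; the table and column names are made up, not the Phoenix schema. The point is that iterating the cursor pulls one row at a time, so only a single row is in memory instead of the whole result set:

```python
import sqlite3

# sqlite3 stands in for MySQL here; "posts" is a hypothetical table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE posts (id INTEGER, body TEXT)")
conn.executemany("INSERT INTO posts VALUES (?, ?)",
                 [(1, "first"), (2, "second"), (3, "third")])

html_lines = []
cur = conn.execute("SELECT id, body FROM posts ORDER BY id")
# Fetch one row, emit one line of HTML, then fetch the next --
# never hold the full table in memory at once.
for post_id, body in cur:
    html_lines.append("<li>%d: %s</li>" % (post_id, body))
conn.close()
print("\n".join(html_lines))
```

With a real MySQL client you'd want an unbuffered (server-side) cursor for the same effect, since many clients buffer the whole result set by default.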
Unfortunately, the connectAndQuery idiom is used everywhere in the code.
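Since the idiom is everywhere, one incremental fix is to keep the connectAndQuery call signature but have it reuse one cached connection per process instead of opening a fresh one each call. This is just a sketch of that idea, not the actual Phoenix code, and the function name is only borrowed for illustration (again using sqlite3 as a stand-in):

```python
import sqlite3

_conn = None  # one cached connection for the whole process

def connect_and_query(sql, params=()):
    """Same interface as the old connect-per-query helper, but opens
    the connection once and reuses it on every later call."""
    global _conn
    if _conn is None:
        _conn = sqlite3.connect(":memory:")
        _conn.execute("CREATE TABLE hits (n INTEGER)")  # hypothetical table
    return _conn.execute(sql, params)

connect_and_query("INSERT INTO hits VALUES (?)", (1,))
connect_and_query("INSERT INTO hits VALUES (?)", (2,))
rows = connect_and_query("SELECT n FROM hits ORDER BY n").fetchall()
```

That way the callers don't have to change at all, and the per-query connect (and the mysqld fork that goes with it) happens once per process instead of three or four times per page view.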
A far better approach is to fetch the result one row at a time, print out a line of HTML, and then fetch the next row.
How do you think we can avoid the totals at the top of the page being different from the ones at the end? Or should we just try to leave all totals or cumulative info until the bottom?
Also -- are the forked processes shutting down correctly, or are we sitting on connections that never disappear (according to what Rob found in the MySQL code)?
To be perfectly honest, I'm not exactly sure why mysqld is forking. Up to now, I had assumed that mysqld was a single-instance, multithreaded process. That's the way I've seen it work in other places. Maybe it's a configuration issue; I haven't seen anything about it in the MySQL manual. Maybe it's just how threads show up in a Linux ps listing.
are the forked processes shutting down correctly, or are we sitting on connections that never disappear (according to what Rob found in the MySQL code)?
It wasn't that the connections weren't going away; it was that an internal variable counting the connections was wrong. I haven't looked into that. I don't see any evidence of actual connections that are sticking around.