Joomla 1.5 performance

apps

#1

I have manually installed a Joomla 1.5.0 site. The performance is not good, and I was wondering if others have experienced this.

For example, hitting the [Home] button can take 10-15 seconds.

I’ve sent a note to support but I was wondering what others’ experience was.

Thanks in advance…


#2

My Joomla! sites, including a couple of “test” Version 1.5 sites, are working pretty well.

It could be that your server is pretty heavily loaded (someone using a lot of CPU or memory), and it could also be the result of your own setup (a very heavy theme, lots of JavaScript, heavy extension usage).

If you log into the shell, you can check your server load using the “w” command. If you find it to be high (over 5 or so), you should let support know.
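If you’d rather check from PHP than from the shell, here’s a minimal sketch using PHP’s built-in sys_getloadavg(), which reports the same numbers “w” prints at the top of its output (the “over 5” threshold is just the rule of thumb above, not a hard limit):

```php
<?php
// loadcheck.php - a rough load check, mirroring the "w" rule of thumb above.
// sys_getloadavg() returns the 1, 5, and 15 minute load averages.
list($load1, $load5, $load15) = sys_getloadavg();

printf("Load averages: %.2f (1 min), %.2f (5 min), %.2f (15 min)\n",
       $load1, $load5, $load15);

// Over 5 or so is worth a note to support (rule of thumb, not a hard limit).
if ($load5 > 5) {
    echo "Load looks high - might be worth contacting support.\n";
} else {
    echo "Load looks reasonable.\n";
}
```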

Do you have a URL you can share so we can see the site “in the wild”?

–rlparker


#3

I’m seeing the same with my new Drupal installation. We turned performance optimizations on and off and haven’t noticed any changes. It’s not “production quality”.

I get a little peeved with shared hosts when performance issues have to be reported by end users. It always starts with one person saying their site is slow, someone else confirming, a note to support usually ending in “seems fine to us”, more complaints, and then finally some internal review that turns up a real issue, whether related to servers or networks. Conversely, if one site admin misconfigures something, they should be able to tell easily how their site is behaving compared to others on the same server and on other servers.

We need a standard script that will run a number of queries against a standard database. Everyone gets the same script and the same tiny database. The results should be displayed and also sent to a repository in support. The files can be parsed for key data like query time and round-trip browser time, and when certain trigger values are found, Support should be able to tell there’s a problem without us having to tell them. Query results can be stored in the database so that trends can be established. It should be easy to graph total transaction times, as well as averages of tracert against the site admin’s IP address.
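To make that concrete, here’s a sketch of what the query-timing piece might look like in PHP. The DSN, credentials, and the perf_test table are placeholders I made up, not a standard anyone has agreed on:

```php
<?php
// perftest.php - a bare-bones sketch of the proposed standard test script.
// The DSN, credentials, and the perf_test table are placeholders; a real
// version would ship with a standard schema and seed data.
$pdo = new PDO('mysql:host=localhost;dbname=PerformanceTest', 'user', 'pass');
$pdo->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);

$queries = array(
    'SELECT 1',                                         // raw round trip to MySQL
    'SELECT COUNT(*) FROM perf_test',                   // simple table scan
    'SELECT * FROM perf_test ORDER BY RAND() LIMIT 10', // a heavier query
);

foreach ($queries as $sql) {
    $start = microtime(true);
    $pdo->query($sql)->fetchAll();
    $elapsed = microtime(true) - $start;
    printf("%-55s %8.2f ms\n", $sql, $elapsed * 1000);
    // A real version would also INSERT each result back into the database
    // so trends can be established, and POST it to the support repository.
}
```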

The initial script and database can be very simple, and this might best be done as open source in the community, with participation from DH techs. It does not need to be rocket science or something that will take months to come to light. I’ll look around for existing scripts/databases in the public domain, but I think all of us would benefit, so I welcome collaboration - or information that will prove this idea a complete waste of time. :slight_smile:


#4

I think that is a great idea! You would think that something like that probably already exists in the FOSS world … I’m off to look! :wink:

–rlparker


#5

While I think the push for common QA test cases has merit, the implementation will be difficult. Most of the time, performance issues are the result of configuration problems: at least 7 times out of 10, they are caused by something that was done wrong during setup and have nothing to do with the server environment.

I am not trying to trivialize the suggestion to have a common test script, but I wonder how we can do the idea justice by using readily accessible open-source tools. One tool I have been playing with recently is Deja Click. It’s a Firefox plugin that allows you to set up macros. Perhaps there is a way to leverage that tool with something like Firebug to do rudimentary benchmarking.

Clearly I don’t have anything compellingly illuminating to add to this thread; I just wanted to present a couple of thoughts publicly.


#6

You may very well be correct about that. I spent about 45 minutes earlier today investigating some things, and ended up discouraged (for many of the same reasons you mentioned).

I also found that some of the candidate tools I came across are not the kind of things I would want people pounding a shared server with. :wink:

–rlparker


#7

Thanks for the positive comments. I think this can be made very simple:

  1. The proposed application is a typical server-side package with a browser-based installer.
  2. Create a database called PerformanceTest.
  3. In the installer/configurator, point to the server using the same settings as for your other apps. Version 2 will allow you to point to different servers and select from a list for testing.
  4. The installer creates a couple of simple tables.
  5. The test page has a couple of basic tests:
    –a) Run multiple queries and return the timing for each.
    –b) A single page refreshes every 5 seconds for 10 tests. Each time, it measures from the moment the page was sent out to the moment the refresh comes back in, subtracts the 5 seconds of client-side wait, and treats the remainder as the network turnaround time.

You can only run 5a against your own server, but you can run 5b against anyone’s. This allows different people to hit the same server to determine whether the issue is in the network or the server, and which system in the server environment may be performing differently from the others.
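For what it’s worth, here’s one way 5b might work: a PHP page that refreshes itself, stamps each hit server-side, and subtracts the 5-second client wait. This is just a sketch, and it folds page-render time into the measurement, so it’s only an approximation - which is really all 5b asks for:

```php
<?php
// roundtrip.php - a sketch of test 5b. Each refresh is stamped on arrival;
// the gap between consecutive hits, minus the 5-second client-side wait,
// approximates the network turnaround time.
session_start();

$now  = microtime(true);
$hits = isset($_SESSION['hits']) ? $_SESSION['hits'] : array();
$hits[] = $now;
$_SESSION['hits'] = $hits;

$count = count($hits);
if ($count > 1) {
    $gap = $hits[$count - 1] - $hits[$count - 2];
    $turnaround = max(0, $gap - 5.0); // subtract the 5s refresh interval
    printf("<p>Test %d of 10: turnaround ~%.0f ms</p>",
           $count - 1, $turnaround * 1000);
}

if ($count <= 10) {
    // Ask the browser to come back in 5 seconds for the next sample.
    echo '<meta http-equiv="refresh" content="5">';
} else {
    echo '<p>Done.</p>';
    unset($_SESSION['hits']);
}
```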

An enhancement would be to allow storing results for relative comparison and detection of abnormal conditions. To verify that a user is really testing, and to stop people from tinkering with the numbers, they would need to put a token on their website that can be queried by the data server. The server gets a request to do a test, checks the token, and if the token is present it saves the results. Wildly abnormal results are discarded. Unusually slow hits for many users trigger an email to Support@host.com and WebMaster@domain.com.
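The token check could be as simple as the data server fetching a known file from the tester’s site before saving anything. A sketch - the token filename, and how tokens get issued per domain, are things I invented here:

```php
<?php
// Sketch of the token check on the data server. The token filename and
// the way tokens are issued per domain are placeholders.
function verify_token($domain, $expected_token) {
    // The tester drops their issued token into a file on their own site.
    $url  = 'http://' . $domain . '/perftest-token.txt';
    $body = @file_get_contents($url);
    return $body !== false && trim($body) === $expected_token;
}

// Example: only accept results from a site that serves the right token.
$domain = 'example.com';   // would come from the test submission
$token  = 'abc123';        // would come from the token registry
if (verify_token($domain, $token)) {
    echo "Token verified - results would be saved here.\n";
} else {
    echo "Token missing or wrong - results discarded.\n";
}
```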

This is so simple I can taste it, but I’m no PHP stud and it would take a while for me to write it. To keep this sufficiently cross-platform, it needs to be entirely server-based and in PHP.

Any better?


#8

My issue has been resolved. A problem was found between the network-attached storage (hard disks) and my server. Unfortunately, support decided the scope of the problem was not large enough to post it on the status website - which I disagree with. It’s a tough job they have, balancing our need to know (and the value of our time diagnosing problems on our sites) against the alternative, which would be to post every outage on the status site - the equivalent of yelling ‘fire’ in a crowded theater.

My suggestion was that if they do not wish to generate undue support calls/emails/bad press/etc., they could post smaller-scale outages in our panel when we log in to our accounts.

I think your ideas are great. Ironically, ‘information technology’ groups are often the last ones to automate themselves. They should find outages, rather than us. It is a big job, but there is probably some 80/20 rule at work: a simple system that detects CPU starvation and network issues and runs some simple PHP/MySQL queries could find most things.

As for a way to automate it, there are several UNIX command-line programs that can be employed to hit some web pages and do automated testing. Just off the top of my head, there are curl, lynx, wget, ping, etc. These could be batched up in a script and run as a cron’d job every x minutes. Etc., etc.

Having said all that, I’m sure the people running the server farm are much more capable than we are (or at least than I am) at coming up with a good, scalable solution. I think our bit is to shine a light on the issue we wish to see resolved. Let them come up with a solution that solves the problem to our satisfaction, and more importantly, one that they can live with and be happy supporting.
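For what it’s worth, a rough cron-able sketch along those lines, sticking with PHP’s curl extension rather than the shell tools. The URL, log path, and the 5-second trigger value are all placeholders I made up:

```php
#!/usr/bin/env php
<?php
// probe.php - a sketch of the cron'd check described above, using PHP's
// curl extension. Run from cron, e.g.: */10 * * * * /usr/bin/php probe.php
$url = 'http://example.com/';

$ch = curl_init($url);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true); // capture body, don't print it
curl_setopt($ch, CURLOPT_TIMEOUT, 30);
curl_exec($ch);

$code  = curl_getinfo($ch, CURLINFO_HTTP_CODE);
$total = curl_getinfo($ch, CURLINFO_TOTAL_TIME); // seconds, as a float
curl_close($ch);

// Append one line per run; trends fall out of the log over time.
$line = sprintf("%s %s %d %.3f\n", date('c'), $url, $code, $total);
file_put_contents('/tmp/probe.log', $line, FILE_APPEND);

// Trigger value: flag anything failed or slower than 5 seconds (arbitrary).
if ($code !== 200 || $total > 5.0) {
    // A real version might mail support here, as suggested above.
    fwrite(STDERR, "Slow or failed probe: $line");
}
```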

my $.02.

Thanks for your feedback. Hopefully they are listening and watching the forums. :-0