As others have already mentioned, make sure you’ve created indexes on the columns you’re using to limit your result set. I’ve got a MySQL database for zipcodes that had a varchar(5) field for the zipcode and four double-precision floats: latitude, longitude, and both of those values in radians. A direct lookup of one record, keyed on the unindexed text column holding the zipcode, took about 0.5 seconds against 35,000 records. I duplicated the zipcode values as zero-padded, unsigned medium integers, indexed that column, and now the lookup of a single record takes about 0.04 seconds.
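A rough sketch of that change, with hypothetical table and column names (`zipcodes`, `zip`, `zip_int`); adjust for your own schema:

```sql
-- Add an unsigned, zero-padded medium integer copy of the zipcode.
-- (ZEROFILL only affects display padding, not storage.)
ALTER TABLE zipcodes
    ADD COLUMN zip_int MEDIUMINT UNSIGNED ZEROFILL;

-- Populate it from the existing varchar(5) column.
UPDATE zipcodes SET zip_int = CAST(zip AS UNSIGNED);

-- Index the new column so lookups don't scan all ~35,000 rows.
CREATE INDEX idx_zip_int ON zipcodes (zip_int);

-- The fast lookup:
SELECT latitude, longitude FROM zipcodes WHERE zip_int = 90210;
```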
As you’ve noticed, any decent RDBMS (and MySQL is indeed enterprise strength) will cache queries for faster execution. The first run is slower because the database is building an execution plan, trying to find the quickest way to perform your query. Seemingly insignificant things can have a huge impact on performance here. I was recently running a fairly simple query in Oracle, arguably enterprise strength, that returned about 800 rows from a table of about 1 million records. The query was taking about 45 seconds. We looked at it in the query analyzer and found some full table scans. After some tweaking we got the query down to about 2 seconds. Moral: don’t blame the database until you’ve done a little experimenting.
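In MySQL the equivalent of that query-analyzer step is EXPLAIN, which shows the execution plan without running the query. Table and column names below are hypothetical:

```sql
-- Prefix the query with EXPLAIN to see how MySQL plans to execute it.
EXPLAIN SELECT latitude, longitude
FROM zipcodes
WHERE zip_int = 90210;

-- In the output, check the "type" column:
--   ALL          -> full table scan (the slow case described above)
--   ref / const  -> an index is being used
-- and the "key" column, which names the index actually chosen.
```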
Another thing to pay attention to is how you form your SELECT. Using “SELECT *” forces the database to look up every column in its system catalog and return data you may not need. Explicitly listing the columns you actually want can improve performance.
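For example (same hypothetical table as above):

```sql
-- Instead of pulling every column:
SELECT * FROM zipcodes WHERE zip_int = 90210;

-- name only what you need:
SELECT latitude, longitude FROM zipcodes WHERE zip_int = 90210;
```

As a bonus, if all the selected columns happen to be part of an index, the database can sometimes answer the query from the index alone (a covering index) without touching the table rows at all.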
Maybe if you post the offending query someone can offer a little advice.