Thanks for commenting.
The target is a small phpBB system (100 users max or so), and it will be a closed system, so SQLite could work for me, I think. But you are right about the file-locking problem; that could surely create problems. It would also only make sense to switch if it improved performance. I just compared the two versions, and right now they seem to perform equally, or nearly so.
Now the problem with the delay is still there, and I am still not sure where it lies. Could it be the Filer that is used?
I was told by support that all the servers, including the MySQL server, use network-attached storage called the Filer.
Now, thinking of the situation where all the files must travel over network cables and through switches, I am curious how this system works. What equipment is in use, and how is it all connected? Could some improvements be made here?
Or is it just that I request the pages from Europe and those traveling packets are delayed at foreign airport security checks? Why the heck is it so slow?
Below is a copy of a post describing how SQLite can work well on a single server with reasonable traffic.
Re: [sqlite] SQLite for large bulletin board systems?
D. Richard Hipp
Fri, 27 Aug 2004 14:30:47 -0700
Larry Kubin wrote:
Hello everyone. I am interested in creating a PHP/SQLite powered
bulletin board system similar to phpBB. However, I have read that
SQLite is best suited for applications that are mainly read-only
(because it locks the database during writes). Do you think a SQLite
powered bulletin board is a bad idea? How would I go about handling
the case where two users are trying to write to the database?
The CVSTrac system on www.sqlite.org is backed by an SQLite
database (of course). Every single hit does a write to the
database. It gets 20K hits/day from 2K distinct IPs and runs
on the equivalent of a 150MHz machine with no problems at all.
It could easily handle more traffic. On a faster machine, it
could handle lots more traffic.
I've run tests on a workstation where an SQLite-backed website
was handling 10 to 20 hits per second (simulated load).
Use the busy handler on SQLite so that if one thread is writing,
all other threads simply wait their turn.
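The busy-handler advice above can be sketched as follows. In PHP you would use the SQLite extension's busy-timeout setting; here is the equivalent in Python's standard `sqlite3` module, whose `timeout` parameter installs exactly this kind of busy handler (the `board.db` filename and `posts` table are made up for illustration):

```python
import sqlite3

# Open the database with a 5-second busy timeout: if another connection
# holds the write lock, this connection retries for up to 5 seconds
# before raising "database is locked", instead of failing immediately.
conn = sqlite3.connect("board.db", timeout=5.0)
conn.execute("CREATE TABLE IF NOT EXISTS posts (id INTEGER PRIMARY KEY, body TEXT)")
conn.execute("INSERT INTO posts (body) VALUES (?)", ("hello",))
conn.commit()
```

With the timeout in place, concurrent writers simply queue up and wait their turn rather than returning errors to the user.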
The trick is not to linger over your writes. Decide what you want
to write into the database, start the transaction, make your
update, and commit. You can make a big change in 10 or 20
milliseconds. What you should avoid doing is starting the
transaction, then doing a bunch of slow computations, then
writing the results and committing. Compute the results first,
before you start the transaction, so that your lock window is small.
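The compute-first pattern described above can be sketched like this (Python `sqlite3` used as a stand-in for PHP; the `stats` table and `slow_computation` function are hypothetical):

```python
import sqlite3
import time

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE stats (k TEXT PRIMARY KEY, v INTEGER)")

def slow_computation():
    # Stand-in for expensive work; do it BEFORE taking the write lock.
    time.sleep(0.01)
    return ("hits", 42)

# Compute the result first, outside any transaction...
key, value = slow_computation()

# ...then hold the write lock only for the brief INSERT + COMMIT.
with conn:  # opens a transaction, commits on successful exit
    conn.execute("INSERT INTO stats (k, v) VALUES (?, ?)", (key, value))
```

The lock window is now just the few milliseconds of the INSERT, not the duration of the slow computation.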
If you want to accumulate a lot of results over time and store
them all atomically, write the results initially into a TEMP
table. Then copy the TEMP table contents into the main database
in a single (atomic) operation. Writing to a TEMP table does not
lock the database.
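The TEMP-table technique might look like this (again sketched in Python's `sqlite3` rather than PHP; the `posts` and `pending` tables are invented for the example):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE posts (id INTEGER PRIMARY KEY, body TEXT)")

# Accumulate rows in a TEMP table; this does not lock the main database.
conn.execute("CREATE TEMP TABLE pending (body TEXT)")
for body in ("first", "second", "third"):
    conn.execute("INSERT INTO pending (body) VALUES (?)", (body,))

# Copy everything into the main table in one short, atomic transaction.
with conn:
    conn.execute("INSERT INTO posts (body) SELECT body FROM pending")
    conn.execute("DELETE FROM pending")
```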
A good rule of thumb is that if your website is small enough
that it can be run off of a single webserver and you do not
need a load-sharing arrangement, then SQLite will probably meet
your needs. If your website traffic gets to be so much that
you are thinking about offloading the database onto a separate
processor or splitting the load between two or more machines,
then you should probably use a client/server database instead.
The best design would be to make the application generic so
that it could use either SQLite or a client/server database.
Then smaller sites could use SQLite and take advantage of
the reduced management overhead it provides, while larger
sites could use a client/server database for scalability.
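One way to keep the application generic is a thin wrapper over the database driver. This is a hypothetical sketch (the `Database` class and placeholder handling are my own invention, not part of SQLite or phpBB), shown in Python where any DB-API 2.0 connection could be swapped in:

```python
import sqlite3

class Database:
    """Minimal backend-agnostic wrapper: smaller sites pass an sqlite3
    connection; larger sites could pass another DB-API 2.0 connection
    (e.g. a MySQL driver) without changing any calling code."""

    def __init__(self, conn, placeholder="?"):
        self.conn = conn
        # MySQL's DB-API drivers use "%s" as the parameter placeholder.
        self.placeholder = placeholder

    def query(self, sql, params=()):
        # Rewrite the generic "?" placeholder for drivers that differ.
        cur = self.conn.cursor()
        cur.execute(sql.replace("?", self.placeholder), params)
        return cur.fetchall()

db = Database(sqlite3.connect(":memory:"))
db.query("CREATE TABLE users (name TEXT)")
db.query("INSERT INTO users (name) VALUES (?)", ("alice",))
```

Callers only ever see `db.query(...)`, so the choice of backend becomes a one-line configuration decision.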
D. Richard Hipp -- [EMAIL PROTECTED] -- 704.948.4565