SQL server sharing system, FileMaker, or what?

I was on a conference call last week with a contractor who was pitching his database sharing solution to us. It was basically one of those packages that interact with MS SQL Server through HTML forms, like a prettied-up version of phpMyAdmin.

What I have been doing for the last ten years is running MySQL databases on DreamHost with ODBC links to MS Access forms and tables, because this agency already had lots of expertise with Access. I also wrote some PHP scripts to present some of the data on the web and to screen-scrape data from other agencies that wasn't available to us in database form, and I provided MySQL connection parameters to a regional coordinating agency to keep some of our tables synced with their system.

We already have a server at my location that we were planning to carve up into a couple of virtual Windows servers running SQL Server, exposed to the internet. One of our reasons was to run ArcServer on top of SQL Server to share GIS data with our field offices, other agencies, and the public. So we are partway toward having what the potential contractor's system needs to run on, and some of my database apps could be adapted to make SQL Server calls rather than MySQL calls.
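That porting job is usually small if the queries go through a driver-neutral layer with parameter placeholders, so that only the connection line (and the placeholder style) changes between MySQL and SQL Server. A rough sketch of the idea, in Python rather than the PHP the scripts actually use, with sqlite3 standing in for a MySQL or ODBC driver and a made-up `flows` table:

```python
import sqlite3

def fetch_flows(conn, station, placeholder="?"):
    """Return (date, flow) rows for one station.

    Works with any DB-API driver: pass placeholder="?" for sqlite3 or
    pyodbc (SQL Server), "%s" for a MySQL driver. Table and column
    names here are invented for illustration.
    """
    sql = f"SELECT obs_date, flow_cfs FROM flows WHERE station = {placeholder}"
    cur = conn.cursor()
    cur.execute(sql, (station,))
    return cur.fetchall()

if __name__ == "__main__":
    # Fabricated sample data in an in-memory database.
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE flows (obs_date TEXT, flow_cfs REAL, station TEXT)")
    conn.executemany(
        "INSERT INTO flows VALUES (?, ?, ?)",
        [("2010-01-01", 120.5, "KLIC"),
         ("2010-01-02", 133.0, "KLIC"),
         ("2010-01-01", 88.2, "PROS")],
    )
    print(fetch_flows(conn, "KLIC"))
```

The same pattern applies in PHP via PDO, where swapping the DSN string is most of the migration.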

However, this week I got a copy of FileMaker for the first time. That got me wondering whether we could run FileMaker Server on our new hardware and set up some database solutions and presentations on our own, cheaper and easier than the contractor's proposal.

What really got me wondering about this contractor's experience and credibility was his opinion that our tables were too big for MySQL to handle. My attitude was that it all depends on how much processing power we carve out for the servers and on the capacity of our data connection. That data connection does worry me: we just have Charter Business cable to our location, but on the other hand, the traffic will be limited to our other offices and other agencies interested in salmon recovery in our region. It's not like I'm trying to run Amazon here. But our tables are too big? Here's an example: http://ykfp.org/php/BOR/prodbflowtempchart.php That PHP script makes two calls to a table with 17k records, one call to a table with 52k records, and one call to a table with 16k records, all against DreamHost-hosted MySQL tables. That's about the size of our current data sets. Do you see any performance problems?

That’s a rather odd statement; I’d start looking for another contractor. Almost all of DreamHost’s internal operations use MySQL databases, and some of our tables have hundreds of millions of rows — the table sizes you’re talking about are peanuts by comparison. As long as the tables are designed and indexed properly, it works just fine.

Concur with AndrewF. I was reading your post with interest until I got to the 17k-records part. 17k records, or even 170k records, or even 1,700k records, is no problem for even SQLite! MySQL wouldn't bat a virtual eyelash at those numbers unless you have no indexes and extremely poorly written queries. Your contractor is selling snake oil.