PHP flush() doesn't work, even with tech support's help

software development

#1

See here for the basic problem others were having:

Basically, <?php flush(); ?> doesn’t work. My specific test script is as follows:
[php]for ($i = 0; $i < 5; $i++) {
    echo $i . " "; // each digit should appear about a second apart if flushing works
    flush();
    sleep(1);
}[/php]

I saw on the DreamHost wiki page about PHP that output-buffering functions such as flush() don't work under normal circumstances, but that I could contact DreamHost to have the compression module deactivated.

So I've been in contact with DreamHost, and they disabled mod_deflate (the wiki article mentions mod_gzip, but apparently it hasn't been updated recently, as they've since moved on to mod_deflate).

The fact that it has been disabled can be seen plainly from the headers (no Content-Encoding: gzip, just chunked transfer encoding):
HTTP/1.1 200 OK
Date: Fri, 14 Dec 2012 21:48:07 GMT
Server: Apache
Keep-Alive: timeout=2, max=100
Connection: Keep-Alive
Transfer-Encoding: chunked
Content-Type: text/html

However, that didn’t allow flush() to work. They have additionally tried various things on the server to get it up and running, but nothing has worked so far. Their suggestion was to come here and ask for help.

After asking specifically what they tried, the response was this:

[quote]disable mod_security as it can stop shit from working with flush()
Set zlib.output_compression = 0
Set output_buffering = 0 (which looks like you had already)
echo str_repeat(" ", 1024), "\n"; at the start of the script.
output_buffering = Off (in php config)
output_handler = (in php config)
zlib.output_compression = Off (in php config)
zlib.output_handler = (in php config)[/quote]
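
For reference, here's that whole list rolled into my test script (the ini_set() calls are my own addition; whether they can actually override the server config at runtime is an open question):

[php]// All of the quoted suggestions combined into one sketch; whether
// ini_set() can change these at runtime depends on the server config.
@ini_set('zlib.output_compression', 0);
@ini_set('output_buffering', 0);
@ini_set('implicit_flush', 1);

while (ob_get_level() > 0) {
    ob_end_flush();               // unwind any buffers PHP opened on its own
}
ob_implicit_flush(true);          // flush automatically after every echo

echo str_repeat(" ", 1024), "\n"; // padding for servers/browsers that buffer small chunks

for ($i = 0; $i < 5; $i++) {
    echo $i . " ";
    flush();
    sleep(1);
}[/php]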

I've seen suggestions in various places to use the apache_*() PHP functions, but those only work under mod_php, and I'm running PHP as FastCGI. Are there any other changes I can make, or steps I can take to track down this issue?


#2

Have you tried switching the domain from FastCGI to CGI? The FastCGI module can be pretty ornery about buffering.


#3

Well, I've tried it now, and it seems to have made no difference. I'm also worried about performance if I left it in that state.

I guess I should define more explicitly what I'm trying to do. I'm trying to receive a POST from the user, process it and return a small amount of data based on it, then continue more intensive processing after the user has received the response. I've set up the script to disconnect from the client after doing the initial processing and returning the small response; the only problem is that it obviously doesn't disconnect until the script dies.

I'd also like to avoid passing the work off to another process if I can, so that I don't need to load data from the DB multiple times (some of the initial processing creates objects that are used again in later processing). If it's absolutely necessary, I suppose it's not the END OF THE WORLD (unless I don't get it done until this Friday…)
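
For reference, the disconnect setup I mentioned is the usual early-close pattern, roughly like this ($small_response and do_heavy_processing() are placeholders); of course, it relies on the same flushing behavior that isn't working for me:

[php]ignore_user_abort(true);    // keep running even if the client goes away
ob_start();
echo $small_response;       // placeholder: the quick reply to the user

header('Connection: close');
header('Content-Length: ' . ob_get_length());
ob_end_flush();             // send the buffered response
flush();                    // the client should be able to disconnect here...

do_heavy_processing();      // placeholder: ...while this keeps running[/php]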


#4

register_shutdown_function() is what you want.


#5

I’d pass off to another process (although I probably wouldn’t use PHP).

http://mage.dreamhosters.com/runit/

run caller.php
hit back after you’ve been redirected
wait 15 seconds
hit refresh

caller.php

[php]<?php
// code here for client data
header("Location: http://google.com.au"); // just pretending to return data to the client

$continuation = 'http://mage.dreamhosters.com/runit/continue.php';
$command = 'wget -q ' . $continuation;
@exec("$command > /dev/null &", $arrOutput);
?>[/php]

continue.php

[php]<?php
// beef this up
if ($_SERVER["REMOTE_ADDR"] != $_SERVER["SERVER_ADDR"]) {
    header("Location: http://mage.dreamhosters.com/runit/");
    exit;
}

set_time_limit(30); // just in case
sleep(15);          // purely for demonstration purposes

$t = 'The quick brown fox.. you know the rest.'; // process data
$f = 'newfile.txt';
$fh = fopen($f, 'w');
fwrite($fh, $t);
fclose($fh);
?>[/php]

Needs securing, error checking, etc. but you get the idea.


#6

register_shutdown_function() only helps if I’m passing off to another process anyway, since the shutdown function is run as part of the request and therefore still delays the return.

As for the wget approach, it's what I'd already viewed as my last resort. It's nice to have an implementation to start from. I'd need to pass along both the GET and POST data so the new process can re-load everything, which is easy enough, although I'd probably use curl over wget. Either way, though, I assume it'd use double the bandwidth, which I'd also prefer to avoid.
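
For concreteness, the hand-off I have in mind would be something like this (the URL is just the demo one from above; the temp-file plumbing is a guess, not tested code):

[php]// Forward the original GET and POST data to the continuation script
// via the curl CLI; the URL and temp-file handling are assumptions.
$continuation = 'http://mage.dreamhosters.com/runit/continue.php';
$query = $_SERVER['QUERY_STRING'];          // original GET data
$body  = file_get_contents('php://input');  // raw POST body

$tmp = tempnam(sys_get_temp_dir(), 'post'); // stash the body in a temp file
file_put_contents($tmp, $body);

$command = 'curl -s --data-binary @' . escapeshellarg($tmp)
         . ' ' . escapeshellarg($continuation . '?' . $query);
@exec("nohup $command > /dev/null 2>&1 &");[/php]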

Still, I'm seeing precious few options, so I may just have to deal with it.


#7

Not if you manage your connections correctly.


#8

Explain what you mean, exactly. register_shutdown_function() does nothing but ensure a piece of code runs after exit() or die() is called, or after the last line of PHP code finishes. I just tried this code, and it still waits out the delay before returning anything to the client.

[php]function on_shutdown() {
    sleep(1);
    echo "Hello, world!";
}

echo "Startup achieved!";
register_shutdown_function('on_shutdown');
die;[/php]


#9

Yeah, so would I… and then some.

I was chatting with bobocat at the time and tried to get an example down in 10 lines lol

[EDIT]

Re bandwidth: you could try experimenting with nohup’ing the PHP CLI.
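
i.e. launch the continuation through the php binary directly instead of wget'ing a URL, so there's no second HTTP request at all. Roughly like this (the script path is a guess):

[php]// Run the continuation via the PHP CLI instead of over HTTP,
// so no extra bandwidth is used; the script path is a guess.
$script  = '/home/username/mage.dreamhosters.com/runit/continue.php';
$command = 'nohup php ' . escapeshellarg($script);
@exec("$command > /dev/null 2>&1 &");[/php]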


#10

Actually, the simplest solution I've found so far doesn't involve trying to wrangle a separate process that only sort of does what I want.

Instead, I'm modifying a handy code snippet I found on Stack Overflow.

[php]function repost_request_async() {
    $host = $_SERVER['HTTP_HOST'];
    $port = $_SERVER['SERVER_PORT'];
    $path = $_SERVER['REQUEST_URI'];
    $data = file_get_contents('php://input');

    $fp = fsockopen($host, $port, $errno, $errstr, 30);
    if (!$fp) {
        return; // couldn't open the socket; bail out rather than warn on fwrite()
    }

    $out  = "POST $path HTTP/1.1\r\n";
    $out .= "Host: $host\r\n";
    $out .= "Content-Type: application/octet-stream\r\n";
    $out .= "Content-Length: " . strlen($data) . "\r\n";
    $out .= "Connection: Close\r\n\r\n";
    $out .= $data;

    fwrite($fp, $out);
    fclose($fp);
}[/php]
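
One wrinkle: since this re-posts to the same URI, the script needs some way to tell the re-post apart from the original request, or it'll loop forever. My plan is a marker header, roughly like this (assuming I also add $out .= "X-Async-Repost: 1\r\n"; to the headers above; the helper names are placeholders):

[php]// Guard against infinite re-posting; helper names are placeholders.
if (isset($_SERVER['HTTP_X_ASYNC_REPOST'])) {
    run_heavy_processing(); // the slow follow-up work
    exit;
}

send_quick_response();      // small reply to the user
repost_request_async();     // fire off a copy of this request[/php]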


#11

I’m not entirely sure that I understand what you mean.