Symfony Bundle Optimization with Blackfire

As part of your regular workflow, it is important to keep an eye on the performance of your application.

Blackfire proved to be a perfect solution for that task.

The installation process is really easy; just check the official Blackfire install documentation section.

For the last 1.5 years, I've been working as an engineer at Upwork (formerly oDesk), where I am a member of the Enterprise team. For the most part, I'm involved in Symfony application development. At Upwork, we deal with a microservice architecture, and we use phystrix and phystrix-bundle to communicate with our internal APIs. (For more details, check the talk by our software architect Sep Nasiri from the recent PHP Frameworks Day conference in Kiev, Ukraine.)

At some point, my team realized we needed a regular check to make sure all the related services are still alive :) Understandably, such a health check should be fast. The obvious decision was to create a dedicated bundle, and here is the story of how Blackfire helped make this process almost twice as fast.

I started by adding a service that takes a list of service name => service URL pairs and makes a cURL request to check that each service is alive.

/**
 * Perform cURL request.
 *
 * @param array $urls    URLs to make request.
 * @param array $options additional cURL options.
 *
 * @return array results from the request.
 */
public function processRequest($urls, $options = array())
{
    $results = array();

    foreach ($urls as $key => $url) {
        $ch = curl_init();
        if ($options) {
            curl_setopt_array($ch, $options);
        }
        curl_setopt($ch, CURLOPT_URL, $url);
        curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
        $output = curl_exec($ch);
        $results[$key] = array(
            'info' => curl_getinfo($ch),
            'content' => $output,
        );
        curl_close($ch);
    }

    return $results;
}
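
For context, the $urls argument is simply a map of service names to their health-check URLs. Here is a minimal usage sketch; the service names and URLs are made up for illustration, and $healthChecker stands for the service described above:

// Hypothetical input: service name => health-check URL.
$urls = array(
    'billing'   => 'https://billing.internal.example.com/status',
    'messaging' => 'https://messaging.internal.example.com/status',
);

// $healthChecker is the bundle service that exposes processRequest().
$results = $healthChecker->processRequest($urls);
// e.g. $results['billing']['info']['total_time'] and $results['billing']['content']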

While profiling with Blackfire, I discovered that, with regular sequential cURL requests, checking all the services my app uses takes 2.19 seconds in our development environment. That is quite a lot! And almost all of that time is consumed by the cURL calls themselves.

By the way, here you can find a complete description of the Blackfire interface, and I must add that they have really well-thought-out documentation.
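
If you prefer to trigger profiles from code rather than from the browser, here is a minimal sketch using the blackfire/php-sdk package. This is an assumption on my side: the measurements in this post were taken through the regular Blackfire interface.

use Blackfire\Client;

// A minimal sketch, assuming blackfire/php-sdk is installed and
// the client credentials are configured in ~/.blackfire.ini.
$blackfire = new Client();

// Instrument only the code under test, then send the profile to Blackfire.
$probe = $blackfire->createProbe();
$results = $healthChecker->processRequest($urls); // hypothetical service instance
$blackfire->endProbe($probe);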

I did some investigation and discovered that it is possible to run cURL requests in parallel with the curl_multi functions. I then changed our processRequest method accordingly:

/**
 * Perform parallel cURL request.
 *
 * @param array $urls    URLs to make request.
 * @param array $options additional cURL options.
 *
 * @return array results from the request.
 */
public function processAsyncRequest($urls, $options = array())  
{
    $results = array();
    $channels = array();
    $multiHandler = curl_multi_init();

    // Init curl threads
    foreach ($urls as $serviceName => $url) {
        $channels[$serviceName] = curl_init();
        if ($options) {
            curl_setopt_array($channels[$serviceName], $options);
        }
        curl_setopt_array(
            $channels[$serviceName],
            array(CURLOPT_RETURNTRANSFER => true)
        );
        curl_setopt($channels[$serviceName], CURLOPT_URL, $url);
        curl_multi_add_handle($multiHandler, $channels[$serviceName]);
    }

    // Process curl requests
    do {
        $subConnections = curl_multi_exec($multiHandler, $running);
    } while ($subConnections == CURLM_CALL_MULTI_PERFORM || $running > 0);

    // Parse the responses
    foreach ($channels as $serviceName => $channel) {
        $results[$serviceName] = array(
            'info' => curl_getinfo($channel),
            'content' => curl_multi_getcontent($channel),
        );

        curl_multi_remove_handle($multiHandler, $channel);
    }
    curl_multi_close($multiHandler);

    return $results;
}
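
For completeness, here is roughly how the returned array can be turned into a per-service up/down status. This is just a sketch with hypothetical names; the actual bundle code may look different:

// A service is considered alive when its endpoint answered with HTTP 200.
$statuses = array();
foreach ($healthChecker->processAsyncRequest($urls) as $serviceName => $result) {
    $statuses[$serviceName] = isset($result['info']['http_code'])
        && $result['info']['http_code'] === 200;
}
// e.g. array('billing' => true, 'messaging' => false)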

At this point, Blackfire already showed quite satisfactory results, but performance could be even better!

Curl requests after moving to async

As you can see, CPU usage increased roughly threefold compared with the first run. The main reason for such an enormous increase is the repeatedly called curl_multi_exec, which alone accounts for an impressive 47,501 calls (!). What we can do to decrease that enormous number of calls, caused by waiting on network latency, is to put our script to sleep between iterations. My further experiments showed the optimum duration of such a sleep to be 5,000 microseconds.

// Process curl requests
do {
    $subConnections = curl_multi_exec($multiHandler, $running);
    usleep(5000); // stop wasting CPU cycles and rest for 5 ms
} while ($subConnections == CURLM_CALL_MULTI_PERFORM || $running > 0);
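
As a side note, instead of a fixed usleep() it is also possible to let cURL wait on the underlying sockets with curl_multi_select. I have not benchmarked this variant for the bundle, so treat the following only as a sketch of the alternative:

// Alternative sketch: block on socket activity instead of sleeping blindly.
do {
    $subConnections = curl_multi_exec($multiHandler, $running);
    if ($running) {
        // Wait up to 5 ms for activity on any of the handles.
        curl_multi_select($multiHandler, 0.005);
    }
} while ($subConnections == CURLM_CALL_MULTI_PERFORM || $running > 0);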

Here is what we get now:

And here are the final statistics:

Final Blackfire stats

Most of the metrics demonstrate significant improvements, and I believe these are great results indeed.

Now, I would like to once again point out the advantages that make Blackfire a perfect way to get clearly visualized performance metrics for your application. First, it is both easy to start with and easy to use, thanks to the clearly outlined instructions and well-developed documentation. Another major advantage is that it doesn't require any extra effort, such as formal access approval or special coordination with fellow team members, to get started. Finally, it is absolutely great that you can introduce Blackfire at any stage of your product's development, not necessarily from the very beginning.