cURL with Twitter API: Sometimes receives no data


I switched from using file_get_contents to cURL for communicating with the Twitter API to fetch the user's latest tweets.
cURL is great since it doesn't produce an error when the fetch fails to return anything, but I'm wondering what I can do when cURL doesn't return anything either.

It seems to retrieve the data at first, but every time I refresh my webpage it calls the function (below) to fetch the data again, and sometimes nothing comes back at all, so I'm left with a blank space where my tweets should be.

Here's the function I call to fetch the data; of course it runs on every page refresh. Using a caching system is sometimes worse, because it can cache the pages at a moment when they didn't have the Twitter data.

//Gets the user's latest tweets from Twitter.
function get_latest_tweets() {
	$options = get_option('website_options');
	$userID = $options['twitter_user'];
	$url = 'http://twitter.com/statuses/user_timeline/' . $userID . '.xml?count=3';
	$ch = curl_init();
	curl_setopt ($ch, CURLOPT_URL, $url);
	curl_setopt ($ch, CURLOPT_RETURNTRANSFER, 1);
	curl_setopt ($ch, CURLOPT_CONNECTTIMEOUT, 5);
	$file_contents = curl_exec($ch);
	curl_close($ch);
	$xml = new SimpleXMLElement($file_contents);
	foreach ($xml->status as $status) {
		$tweet = $status->text;
		$id = $status->id;
		$date = $status->created_at;
		//Format the tweet (linkify URLs, @mentions, etc.)
		$formatted_date = date( "M jS g:i a", strtotime($date) );
		echo "<p class='tweet'>" . $tweet . "<a href='http://twitter.com/" . $userID . "/status/" . $id . "'> - " . $formatted_date . "</a></p>";
	}
}

Is there anything simple I could do about this? I thought of storing the returned tweets in an array until a certain timeframe passes and then refreshing that array, but I'm not sure whether the data arriving is random or Twitter is limiting their bandwidth.

Thanks for the help.

You’ll probably want to introduce some sort of error checking in there to make sure the curl operation completed successfully.


$file_contents = curl_exec($ch);
if ($file_contents === false) {
    echo 'Curl error: ' . curl_error($ch);
}

That will tell you if it’s an issue with your curl transaction, or if it’s an issue with Twitter just not returning any data.
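Building on that, a minimal sketch of a fetch wrapper that surfaces both failure modes: a transport error (curl_exec returns false) and an empty or non-200 response from the server. The function name is my own; the curl calls are standard PHP.

```php
<?php
// Hedged sketch: wrap the curl call so failures are visible to the caller
// instead of silently producing an empty page.
function fetch_url($url) {
    $ch = curl_init();
    curl_setopt($ch, CURLOPT_URL, $url);
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
    curl_setopt($ch, CURLOPT_CONNECTTIMEOUT, 5);
    $contents = curl_exec($ch);
    $error    = curl_error($ch);                        // '' when nothing went wrong
    $status   = curl_getinfo($ch, CURLINFO_HTTP_CODE);  // 0 for non-HTTP URLs
    curl_close($ch);
    if ($contents === false) {
        // Transport-level failure (timeout, DNS, etc.)
        return array('ok' => false, 'error' => $error, 'body' => '');
    }
    // Treat a 200 (or a non-HTTP fetch) with a non-empty body as success.
    $ok = ($status === 200 || $status === 0) && $contents !== '';
    return array('ok' => $ok, 'error' => '', 'body' => $contents);
}
```

The caller can then decide what to do on `'ok' => false` (fall back to a cache, show a notice) rather than feeding an empty string to SimpleXMLElement.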

If it’s just an issue with Twitter not returning data, you can use sessions to do a semi-cache.

if ($file_contents != "") {
    $_SESSION['cached_tweets'] = $file_contents;
}

$xml = new SimpleXMLElement($_SESSION['cached_tweets']);

Or something like that. That way if Twitter returns an empty dataset, you can just use the last good dataset from the $_SESSION var.
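Putting that idea into one small helper (a sketch; the function name and session key are my own, and it assumes session_start() has run):

```php
<?php
// Hedged sketch of the session "semi-cache": only overwrite the cache
// when the fetch actually returned data, otherwise serve the last good copy.
session_start();

function cache_tweets($file_contents) {
    if ($file_contents !== false && $file_contents !== '') {
        $_SESSION['cached_tweets'] = $file_contents;
    }
    // Freshest data we have; null if nothing has ever arrived this session.
    return isset($_SESSION['cached_tweets']) ? $_SESSION['cached_tweets'] : null;
}
```

A null return is the one case you still have to handle on the page (e.g. show a friendly "tweets unavailable" notice), since a brand-new session has no last good dataset to fall back on.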

I would separate the two operations: fetch-and-store, and display.

fetch and store: I'd have cron call a script which attempts to grab the data every minute or two, say. On success, format the data so that it is ready to be included in your HTML page, and cache that output as a ready-made plain HTML text file.


Then have your webpage just

include "tweets.html";

in the correct spot.

If for any reason (latency etc.) the last fetch-and-store cycle did not work, you still have a reliable fallback showing the previously cached version, and your users will be none the wiser.
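A sketch of the store half of that scheme. The key detail is writing to a temp file and renaming, so a page render never includes a half-written tweets.html; the filename and function name are placeholders.

```php
<?php
// Hedged sketch of the cron-side store step: only replace the cached HTML
// when the fetch/format step produced something, and replace it atomically.
function store_tweets_html($html, $target = 'tweets.html') {
    if ($html === '') {
        return false; // fetch failed: keep the last good copy on disk
    }
    $tmp = $target . '.tmp';
    if (file_put_contents($tmp, $html) === false) {
        return false;
    }
    // rename() is atomic on the same filesystem, so readers of $target
    // always see either the old complete file or the new complete file.
    return rename($tmp, $target);
}
```

The display side then stays exactly as described: `include "tweets.html";` in the correct spot.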

I'm not sure a cron would work in this situation. I think they're pulling different timelines depending on the user that's viewing the page.

Oh yes, looks as if you are right there.

As you say, then – do some basic error checking, and then perhaps fall back to displaying a default tweet stream?

That would at least put something on the page.

If not, display some kind of friendly notice and try the fetch again.

Thanks kduv, your method works well once the data is received, but since it's a per-user session it's not the case for everyone; sometimes a session never receives anything, yet it never produces an error.
I'll keep looking into it; at least I know I can use sessions now.

Well, I managed to figure it out so for future reference (anyone who comes across this thread)…
I didn’t realise the API was versioned and that I was using it incorrectly.

The URL should actually have been as follows:

$url = 'http://api.twitter.com/1/statuses/user_timeline/' . $userID . '.xml?count=3';

Notice the api.twitter.com instead of twitter.com, and the version number after it. It doesn't really matter much now anyway, since the XML format has been deprecated. Version 1.1 requires JSON and OAuth.
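For anyone landing here on 1.1: a hedged sketch of the application-only (bearer token) flow, which is enough for reading public timelines. The endpoints are the documented 1.1 ones; the helper names are my own, and you still need your app's consumer key and secret.

```php
<?php
// Sketch of Twitter API 1.1 application-only auth (assumption: you only
// need to read public timelines, so no per-user OAuth dance is required).

// Step 1 credentials: base64(urlencode(key) . ':' . urlencode(secret)),
// sent as "Authorization: Basic ..." in a POST to
// https://api.twitter.com/oauth2/token with body grant_type=client_credentials.
function bearer_credentials($consumer_key, $consumer_secret) {
    return base64_encode(rawurlencode($consumer_key) . ':' . rawurlencode($consumer_secret));
}

// Step 2: use the bearer token returned by that POST on the JSON endpoint.
function fetch_timeline_json($bearer_token, $screen_name) {
    $url = 'https://api.twitter.com/1.1/statuses/user_timeline.json'
         . '?screen_name=' . rawurlencode($screen_name) . '&count=3';
    $ch = curl_init($url);
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
    curl_setopt($ch, CURLOPT_HTTPHEADER, array('Authorization: Bearer ' . $bearer_token));
    $json = curl_exec($ch);
    curl_close($ch);
    // json_decode gives an array of tweet objects instead of SimpleXML statuses.
    return ($json === false) ? null : json_decode($json, true);
}
```

The display loop then reads `$tweet['text']`, `$tweet['id_str']` and `$tweet['created_at']` from the decoded array instead of the old `$status->text` etc.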

That’s good to know. I’ve never actually needed to do any Twitter integration so I’m not familiar with their API.