Connecting to multiple databases with PDO

I have several websites, some of which share database tables and some of which have unique tables. My database is getting kind of big, so I figured it’s time to split it into separate databases, if only to help me keep things organized. The problem is that some of my pages have queries that now target multiple tables in different databases. Imagine trying to join a table in Database1 with a table in Database2, for example.

Here’s a very simple query I made using my original connection:

 $dsn = "mysql:host=localhost;dbname=geobase;charset=utf8";
 $opt = array(
    PDO::ATTR_ERRMODE            => PDO::ERRMODE_EXCEPTION,
    PDO::ATTR_DEFAULT_FETCH_MODE => PDO::FETCH_ASSOC
 );

 $pdo = new PDO($dsn,'root','root', $opt);

$stm = $pdo->prepare("SELECT A.Site, A.URL, A.Title, A.Article
    FROM 1_about A
    WHERE A.Site = 'g1' AND A.URL = 'webranger'");
$stm->execute(array(
    // 'MyURL' => '%'.$MyURL.'%'
));

while ($row = $stm->fetch())
{
    $Article = $row['Article'];
}

echo $Article;

But when I try doing the same thing with a query that targets two databases (even though I’m just querying a single table), I get this error message: Notice: Undefined variable: Content in /Applications/MAMP/htdocs/gx/index.php on line 217

Can anyone tell me what’s wrong with my script below?

Also, if I get it to work, how would I modify it so I can join two tables from two different databases?

Thanks.

try {
    $db1 = new PDO('mysql:dbname=geobase;host=localhost', 'root', 'root');
    $db2 = new PDO('mysql:dbname=geolife;host=localhost', 'root', 'root');
} catch (PDOException $ex) {
    echo 'Connection failed: ' . $ex->getMessage();
}

$stm = $db1->prepare("SELECT A.Site, A.URL, A.Title, A.Article
    FROM 1_about A
    WHERE A.Site = 'g1' AND A.URL = 'webranger'");
while ($row = $stm->fetch())
{
    $Content = $row['Article'];
}

echo $Content;

Looks like you haven’t executed the query, so you’re fetching nothing, and your variable “Content” is, as the error says, undefined.
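
For example, your same script with the missing execute() call added between the prepare and the fetch loop should do it:

$stm = $db1->prepare("SELECT A.Site, A.URL, A.Title, A.Article
    FROM 1_about A
    WHERE A.Site = 'g1' AND A.URL = 'webranger'");
$stm->execute(); // this is the line that actually runs the query

while ($row = $stm->fetch())
{
    $Content = $row['Article'];
}

echo $Content;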

Scott


Oh, I thought you only had to include the execute line if you were working with variables. Thanks.

The execute is what runs the SQL. The prepare just builds the query so that you can execute it without having to build it every time.
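
For example, you can prepare once and then execute the same statement repeatedly with different values (the second URL below is made up, just to illustrate):

$stm = $pdo->prepare("SELECT Article FROM 1_about WHERE Site = :Site AND URL = :URL");

// one prepare...
$stm->execute(array('Site' => 'g1', 'URL' => 'webranger'));
$first = $stm->fetch();

// ...then the same statement run again with different parameters
$stm->execute(array('Site' => 'g1', 'URL' => 'some-other-page')); // hypothetical URL
$second = $stm->fetch();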

Wow, I thought this was going to take a while to figure out, but it came together rather quickly. This is the script I’m using to query three tables from three different databases, both locally and online. The local databases just use the default credentials (thus the “root”, “root”), while I changed the online usernames and passwords for this post.

It appears to be working both locally and online.

Thanks for the tips.

switch (PHP_OS)
{
    case 'Linux':
        // ONLINE WEBSITE
        $dsn = "mysql:host=localhost;dbname=ruckus_world;charset=utf8";
        $opt = array(
            PDO::ATTR_ERRMODE            => PDO::ERRMODE_EXCEPTION,
            PDO::ATTR_DEFAULT_FETCH_MODE => PDO::FETCH_ASSOC
        );

        $pdo = new PDO($dsn, 'ruckus_seer', 'starfish123', $opt);

        try {
            $db1 = new PDO('mysql:dbname=ruckus_world;host=localhost', 'ruckus_seer', 'starfish123');
            $db2 = new PDO('mysql:dbname=ruckus_life;host=localhost', 'ruckus_seer', 'starfish123');
            $db3 = new PDO('mysql:dbname=ruckus_politix;host=localhost', 'ruckus_seer', 'starfish123');
        } catch (PDOException $ex) {
            echo 'Connection failed: ' . $ex->getMessage();
        }

        $stm = $pdo->prepare("SELECT *
            FROM (
                SELECT URL, 'gw' AS MySiteID, 'World' AS MySection FROM gw_geog
                UNION ALL
                SELECT Taxon AS URL, 'gz' AS MySiteID, 'Life' AS MySection FROM ruckus_life.gz_life_mammals
                UNION ALL
                SELECT URL, 'px' AS MySiteID, 'Topics' AS MySection FROM ruckus_politix.pox_topics
            ) AS Combined
            WHERE Combined.URL LIKE :MyURL");
        $stm->execute(array(
            'MyURL' => $MyURL
        ));

        break;

    default:
        // LOCAL WEBSITE
        $dsn = "mysql:host=localhost;dbname=xworld;charset=utf8";
        $opt = array(
            PDO::ATTR_ERRMODE            => PDO::ERRMODE_EXCEPTION,
            PDO::ATTR_DEFAULT_FETCH_MODE => PDO::FETCH_ASSOC
        );

        $pdo = new PDO($dsn, 'root', 'root', $opt);

        try {
            $db1 = new PDO('mysql:dbname=xworld;host=localhost', 'root', 'root');
            $db2 = new PDO('mysql:dbname=xlife;host=localhost', 'root', 'root');
            $db3 = new PDO('mysql:dbname=xpolitix;host=localhost', 'root', 'root');
        } catch (PDOException $ex) {
            echo 'Connection failed: ' . $ex->getMessage();
        }

        $stm = $pdo->prepare("SELECT *
            FROM (
                SELECT URL, 'gw' AS MySiteID, 'World' AS MySection FROM gw_geog
                UNION ALL
                SELECT Taxon AS URL, 'gz' AS MySiteID, 'Life' AS MySection FROM xlife.gz_life_mammals
                UNION ALL
                SELECT URL, 'px' AS MySiteID, 'Topics' AS MySection FROM xpolitix.pox_topics
            ) AS Combined
            WHERE Combined.URL LIKE :MyURL");
        $stm->execute(array(
            'MyURL' => $MyURL
        ));

        break;
}

while ($row = $stm->fetch())
{
    $URL       = $row['URL'];
    $MySiteID  = $row['MySiteID'];
    $MySection = $row['MySection'];
    // print_r($row);
}

switch ($URL)
{
    case '':
        echo 'NADA';
        break;
    default:
        echo 'SUCCESS!';
        break;
}

You don’t appear to be using $db1, $db2, and $db3 at all, as the database accesses are all being done via $pdo.

As long as the databases are on the same server and use the same username and password, you can access them all from one query simply by connecting to one of the databases. If one of the databases ever gets moved to a different server, it will no longer be accessible through the same query.
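
For example, while connected to geobase you can reach geolife’s tables just by prefixing them with the database name. A rough sketch (reusing table and column names from the snippets above; whether those columns actually join like this is an assumption):

$pdo = new PDO('mysql:host=localhost;dbname=geobase;charset=utf8', 'root', 'root', $opt); // $opt as defined earlier

// 1_about lives in geobase (the connected database); gz_life_mammals lives in
// geolife but is reachable through the same connection via the db.table prefix
$stm = $pdo->prepare("SELECT a.Title, m.Taxon
    FROM 1_about AS a
    JOIN geolife.gz_life_mammals AS m ON m.Taxon = a.URL
    WHERE a.Site = :Site");
$stm->execute(array('Site' => 'g1'));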

Anyway, why are you using different databases rather than just putting all the tables in one database - or is the intention to be able to move one or more of them to different servers if required (which would break the query trying to access them all)?

I have several websites/projects with 202 tables, some of them with over 50,000 rows. I don’t know if that’s considered excessive or not, but I thought it might be a good time to reorganize things. I put all the tables relating to geography into a geography database, all the biology tables into a biology database, etc.

I figured one advantage is that there would simply be less clutter in each database - I can see all the tables at once without scrolling down or clicking on links to page 2, etc. Another advantage is that if I make some sort of mistake in one database, it won’t mess up the tables in the other databases.

Also, I’ve read that really huge databases tend to slow things down, though I don’t know how the experts define “huge.” Do you think what I’m doing is overkill?

My biology database in particular will probably grow significantly, adding a few dozen more tables.

I also installed PostgreSQL because I read it’s better for hierarchical stuff, but I’m not ready for another learning curve, so I halted that experiment. The funny thing is, I get the impression that individual tables are actually easier to work with in PostgreSQL.

There’s just too much information to absorb. :wink:

Really huge databases do tend to slow things down, but from the numbers you are quoting, you only have small databases.

Thanks for the tip. I’ll wait and see how many hoops I have to jump through in order to make my new scheme work. If it’s more trouble than it’s worth, I might just stick with my original single database (which I haven’t deleted yet).

From what I can see, the MySQL limits are quite generous; at least they’ve never been a problem for me.
Limits on Table Column Count and Row Size

There is a hard limit of 4096 columns per table, but the effective maximum may be less for a given table.

Every table (regardless of storage engine) has a maximum row size of 65,535 bytes

Limits on Table Size

Operating System          File-size Limit
Win32 w/ FAT/FAT32        2GB/4GB
Win32 w/ NTFS             2TB (possibly larger)
Linux 2.2-Intel 32-bit    2GB (LFS: 4GB)
Linux 2.4+                (using ext3 file system) 4TB
Solaris 9/10              16TB
OS X w/ HFS+              2TB

Windows users, please note that FAT and VFAT (FAT32) are not considered suitable for production use with MySQL. Use NTFS instead.

The biggest potential problem I can see is developing on a Windows FAT or Linux 32-bit localhost.

Yes, apart from those you should be able to go to at least 2TB per table, and so should easily be able to get into the petabyte range within a single database.

Those limits are also due to maximum file sizes, and so can be worked around by switching to a setup that allows tables to span multiple files - not sure if MySQL has such a type, but other SQL databases would, as some need to hold petabytes of data in a single table.
