PHP Random Number

Guys … is there any way to compare a generated random number against a field in a table?
For example, the generated number is 1, and I have a table called branchlist:
branchlist
code | branch
   1 | 1
   2 | 2
   3 | 3
   … | …
  10 | 10

I also have a table called branch1. I want to match the random number 1 to branch 1, and then insert the data into the branch1 table.
I hope there's someone who can help me … TIA

What exactly isn't clear to you in this task?

  1. Generate number
  2. Select row with branch = $number
  3. Insert data

All three of them, sir … can you help me?

<?php 
    // Database credentials
    $servername = "localhost";
    $username = "username";
    $password = "password";
    $dbname = "myDB";

    // Create connection
    $conn = new mysqli($servername, $username, $password, $dbname);
    // Check connection
    if ($conn->connect_error) {
        die("Connection failed: " . $conn->connect_error);
    } 

    // Get a random number from 1 to 10
    $number = rand(1, 10);

    // Select the matching branch ($number is an integer we generated
    // ourselves, so it is safe to interpolate into the query)
    $sql = "SELECT branch FROM branchlist WHERE code = {$number}";
    $result = $conn->query($sql);

    if ($result->num_rows === 0) { die('Nothing found, bad random number'); }

    $found_row = $result->fetch_assoc();

    // Generate the table name
    $table = 'branch' . $found_row['branch'];

    // Modify this query to insert what you want
    $sql = "INSERT INTO {$table} ....";
    $conn->query($sql);

    $conn->close();
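For the placeholder INSERT, a prepared statement is the safer way to pass values in. A minimal sketch to drop in before the $conn->close() call - the columns name and amount here are hypothetical, not from the original question:

// Hypothetical columns and values for illustration only - use your real ones
$name = 'example';
$amount = 42;

// The table name cannot be a bound parameter, but it was built from a
// value read out of branchlist above, not from user input
$stmt = $conn->prepare("INSERT INTO {$table} (name, amount) VALUES (?, ?)");
$stmt->bind_param('si', $name, $amount);
$stmt->execute();
$stmt->close();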

You can also make MySQL return a random row for you instead of generating a random number yourself. That guarantees you will never get a "bad" random number. To do this, just modify the first query:

$sql = "SELECT branch FROM branchlist ORDER BY RAND() LIMIT 1";

But note that this approach can be slow if your branchlist table is very big.

Another (and possibly better) method would be to use the above SELECT query without the ORDER BY or LIMIT clauses, and then use PHP's array_rand() function to pick a random branch from the result set once it has been loaded into an array, as sketched below.
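A minimal sketch of that approach, reusing $conn from the script above (note that array_rand() returns a random key, not the value itself):

$result = $conn->query('SELECT branch FROM branchlist');
$rows = $result->fetch_all(MYSQLI_ASSOC);   // load the whole result set into an array
$random_row = $rows[array_rand($rows)];     // array_rand() gives back a random key
$table = 'branch' . $random_row['branch'];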

That method has the disadvantage of having to read the entire table when only one record is needed.

From tests I have done, ORDER BY RAND() ... LIMIT 1 is reasonably efficient for up to about 750 rows. Above that it is more efficient to make two calls: the first counts the rows, and then, after generating a random offset within that count, the second simply reads the selected record. Something along these lines:

$result = $conn->query('SELECT COUNT(*) FROM branchlist');
if ($row = $result->fetch_row()) {
    // OFFSET is zero-based, so pick from 0 to count - 1
    $offset = rand(0, $row[0] - 1);
    $result = $conn->query('SELECT branch FROM branchlist LIMIT 1 OFFSET ' . $offset);
}

This way you don't have to randomly sort the contents of the table (ORDER BY RAND()) or read in the entire table - it is just as efficient with millions of rows in the table as it is with a few hundred.


This is a little script I've put together to test 4 different methods of getting a random ID from a table (the table used has 238 rows). To be a fairer test it probably needs to be run 10 times for each method with the average time displayed:

<?php

/* Connect to a MySQL database using PDO */
$dsn = 'mysql:dbname=universal_empires;host=localhost';
$user = 'universal_empire';
$password = 'password';

try {
    $conn = new PDO($dsn, $user, $password);
} catch (PDOException $e) {
    // Stop here - the rest of the script needs a working connection
    die('Connection failed: ' . $e->getMessage());
}
echo '<p>Method 1</p>';
$time_start = microtime(true);

$stmt = $conn->prepare('SELECT COUNT(*) FROM ue_user_login_log');
$stmt->execute();
$result = $stmt->fetch();

$offset = rand(0, $result[0] - 1);   // OFFSET is zero-based

$stmt = $conn->prepare('SELECT id FROM ue_user_login_log LIMIT 1 OFFSET '.$offset);
$stmt->execute();
$id = $stmt->fetch();

var_dump($id[0]);

$time_end = microtime(true);
$time = $time_end - $time_start;

echo "Method 1 took $time seconds to run\n";

//Reset timer
$time_start = 0;
$time_end = 0;

echo '<p>Method 2</p>';
$time_start = microtime(true);

$stmt = $conn->prepare('SELECT id FROM ue_user_login_log');
$stmt->execute();
$result = $stmt->fetchAll();

// Note: array_rand() returns a random *key*, not the value itself,
// so this dumps a row index rather than an actual id
$random_id = array_rand($result, 1);

var_dump($random_id);

$time_end = microtime(true);
$time = $time_end - $time_start;

echo "Method 2 took $time seconds to run\n";

//Reset timer
$time_start = 0;
$time_end = 0;

echo '<p>Method 3</p>';
$time_start = microtime(true);

$stmt = $conn->prepare('SELECT id FROM ue_user_login_log');
$stmt->execute();
$result = $stmt->fetchAll();
shuffle($result);

var_dump($result[0]['id']);

$time_end = microtime(true);
$time = $time_end - $time_start;

echo "Method 3 took $time seconds to run\n";

echo '<p>Method 4</p>';
$time_start = microtime(true);

$stmt = $conn->prepare('SELECT id FROM ue_user_login_log ORDER BY RAND() LIMIT 1');
$stmt->execute();
$result = $stmt->fetch();

var_dump($result[0]);

$time_end = microtime(true);
$time = $time_end - $time_start;

echo "Method 4 took $time seconds to run\n";
?>

The results from one execution:

Method 1
string '151' (length=3)
Method 1 took 0.0015201568603516 seconds to run

Method 2
int 165
Method 2 took 0.0023508071899414 seconds to run

Method 3
string '81' (length=2)
Method 3 took 0.0025978088378906 seconds to run

Method 4
string '108' (length=3)
Method 4 took 0.001582145690918 seconds to run
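
As suggested above, a fairer comparison would average several runs of each method. A minimal sketch of such a helper, assuming the same PDO $conn; the run count of 10 is arbitrary:

function benchmark(callable $method, int $runs = 10): float
{
    $total = 0.0;
    for ($i = 0; $i < $runs; $i++) {
        $start = microtime(true);
        $method();
        $total += microtime(true) - $start;
    }
    return $total / $runs;   // average seconds per run
}

// Example: average Method 4 over 10 runs
$avg = benchmark(function () use ($conn) {
    $stmt = $conn->prepare('SELECT id FROM ue_user_login_log ORDER BY RAND() LIMIT 1');
    $stmt->execute();
    $stmt->fetch();
});
echo "Method 4 averaged $avg seconds per run\n";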

@driczdc

How many records are there in the branchlist table?
Will the number of records in the branchlist table stay reasonably static?

How many records do you plan on inserting into the branch1 table at a time?

With so few rows it really doesn’t matter which way you do it. The faster ways will only become noticeably faster when there are a lot more rows than that.

It also depends on how many you want. If you want to select, say, 30 records at random, then ORDER BY RAND() can be faster than some of the alternatives even with hundreds of thousands of records.

That would depend on the DB used. If you use MySQL or PostgreSQL (possibly others as well) it is not correct, because when you use an offset the database still reads all the records up to the ones you want to pull. So the more records there are in the table before your offset, the slower it will be.

Example:
LIMIT 1000000, 1 (or LIMIT 1 OFFSET 1000000)
will read 1,000,001 rows from the table.

A workaround for this issue, if your table has a numeric primary key, is to get the highest key and then generate a random number between 1 and that highest value. Of course, if this is a table where you delete a lot of records, it can be difficult to make this really random, i.e. if you are missing several thousand records between X and Y, etc.

SELECT branch FROM branchlist WHERE id >= 123456 ORDER BY id LIMIT 1
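
Put together in PHP, that workaround might look like this sketch (assuming an auto-increment id column on branchlist; as noted, gaps left by deletes make the selection less than perfectly random):

// Pick a random id between 1 and the current maximum, then read the
// first row at or above it (>= copes with gaps left by deleted rows)
$result = $conn->query('SELECT MAX(id) FROM branchlist');
$row = $result->fetch_row();
$random_id = rand(1, $row[0]);
$result = $conn->query('SELECT branch FROM branchlist WHERE id >= ' . $random_id . ' ORDER BY id LIMIT 1');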

On a side note, this is why pagination based on OFFSET becomes very slow on larger tables and should be avoided if possible.
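
The usual fix there is keyset (seek) pagination: remember the last id of the previous page and filter on it instead of skipping rows. A sketch using the thread's example table (the ids here are illustrative):

-- OFFSET pagination: reads and discards 10000 rows to return 20
SELECT id, branch FROM branchlist ORDER BY id LIMIT 20 OFFSET 10000;

-- Keyset pagination: seeks straight past the last id of the previous page
SELECT id, branch FROM branchlist WHERE id > 10020 ORDER BY id LIMIT 20;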

Okay - but it is still only a linear difference, based on half the number of records in the table on average… Sorting millions of records into random order takes a lot more than proportionally longer than sorting a few hundred records into random order, because a sort grows faster than linearly.

A linear difference is effectively no difference at all compared with the cost of sorting the entire table rather than just reading it.

A sequential read is still more efficient than ORDER BY RAND() for getting one record from a MySQL database when there are more than about 750 records in total.

Yes, a sequential read is more efficient than ORDER BY RAND(); I never argued about that. (For now, let's keep ORDER BY RAND() out of the picture, since you would never use it in a professional setting.)

My point was just that you said using LIMIT with OFFSET was "just as efficient with millions of rows in the table as with a few hundred", and that is not correct.

The more records there are in the table, the slower the query becomes.

While the difference between LIMIT with OFFSET and the method I described is minimal with few records in the table, it starts to become measurable at a few tens of thousands of rows, and from that point on the performance difference just keeps growing.

That's changing the entire topic, though, as that is exactly what was offered as an alternative. Also, ORDER BY RAND() is the more efficient method when one record is needed and there are only a few hundred records in the table, or when 30 records are needed and there are only a few tens of thousands of records in the table.

Also, assuming the table has been in use for a while and the ids therefore form a sparse rather than a dense list, your method will not return a uniformly random result, because the gaps differ in size: a row that follows a large gap of deleted ids is much more likely to be chosen than one that follows no gap at all.
