an “id” column is typically used as a surrogate key, and almost invariably it is an auto_increment column
I have yet to hear a good explanation of why the switch needs to be made to BIGINT when there are 2 to 4 billion numbers available (depending on whether you use UNSIGNED, which you should)
how close are you to reaching even 1 billion rows in the table?
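if you want to gauge that, you can compare the table's current auto_increment counter with the signed INT ceiling — a quick check along these lines (substitute your own schema name for the placeholder):

SELECT TABLE_NAME,
       AUTO_INCREMENT,                              -- next value that will be assigned
       AUTO_INCREMENT / 2147483647 AS used_fraction -- share of the signed INT range consumed
  FROM INFORMATION_SCHEMA.TABLES
 WHERE TABLE_SCHEMA = 'your_database'               -- placeholder: your schema name here
   AND AUTO_INCREMENT IS NOT NULL;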
I know that both are normally auto_increment surrogate PKs.
I want to know the process I must follow if I ever need to change from INT to BIGINT. I am nowhere near exhausting the range of INT, but if I ever got close, I don’t want to be racing against time to learn how to do it before I exceed the INT range.
Is it possible to change from INT to BIGINT without having to reset each constraint manually?
I found this in the docs, but I don’t know what the ‘do your processing’ part should include, although I assume it has to be the command to change the column’s data type to BIGINT.
ALTER TABLE yourtablename DISABLE KEYS;
SET FOREIGN_KEY_CHECKS=0;
... do your processing ...
SET FOREIGN_KEY_CHECKS=1;
ALTER TABLE yourtablename ENABLE KEYS;
ALTER TABLE businesses DISABLE KEYS;
SET FOREIGN_KEY_CHECKS=0;
ALTER TABLE businesses MODIFY id BIGINT;
SET FOREIGN_KEY_CHECKS=1;
ALTER TABLE businesses ENABLE KEYS;
which is meant to change the column id in the table businesses to BIGINT.
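If it matters, my understanding is that MODIFY replaces the whole column definition, so presumably the NOT NULL and auto_increment parts have to be restated or they would be lost, and any columns in other tables that reference businesses.id would need the same change. So I imagine the middle part would look something like this (the referencing table and column here are just made-up examples):

SET FOREIGN_KEY_CHECKS=0;
ALTER TABLE businesses MODIFY id BIGINT NOT NULL AUTO_INCREMENT;
-- hypothetical referencing table/column; repeat for every foreign key that points at businesses.id
ALTER TABLE branches MODIFY business_id BIGINT;
SET FOREIGN_KEY_CHECKS=1;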
If you were able to confirm that, I shall read up on using UNSIGNED. I want the auto_increment to begin from 1, but maybe that is erroneous since -1 is just as unique?
All integer types can have an optional (nonstandard) attribute UNSIGNED
I try to use standard SQL wherever possible, so I didn’t specify the signed/unsigned part, which means it defaults to signed. Is that right?
So if I retain standard SQL by using only the positive half of the permitted range, I may run outside that range sooner, and so I may need to move to BIGINT sooner than anticipated (if at all)?
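Just so I have the syntax clear, I believe the unsigned version would look something like this if I ever went that route (UNSIGNED is a MySQL extension rather than standard SQL, and since auto_increment only ever generates positive values, it roughly doubles the usable range; the same caveat about matching referencing columns would apply):

-- shifts the range to 0 .. 4294967295 instead of -2147483648 .. 2147483647
ALTER TABLE businesses MODIFY id INT UNSIGNED NOT NULL AUTO_INCREMENT;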