We have been working on a large-ish PostgreSQL database migration to a Google Cloud VM instance (Linux 14.04) for several weeks now, and are having issues with the final step: pg_restore.
We are trying to restore from a 17GB .backup file. We have tried several permutations of the pg_restore command, including:
- No existing target database, include the `--create` flag.
- Connect to an existing target database, include the `--clean` and `--create` flags (per these Stack Exchange answers).
- Specifying `--host=localhost` (per 68 here).
Several attempts resulted in the rows of the restore being written to the nohup.out file.
A few permutations we've tried:
nohup pg_restore --host=localhost --username=postgres --password --role=postgres --dbname=dbname_to_restore path/to/data.backup > nohup.out 2> err.out
pg_restore --username "postgres" --password --role "postgres" --create --clean "/path/to/data.backup"
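One thing we noticed while writing this up: the second invocation above has no `--dbname`, and per the pg_restore documentation, without `-d`/`--dbname` pg_restore writes an SQL script to standard output instead of restoring into a database, which may be why the rows ended up in nohup.out. A version we are considering (database names are our assumptions):

```shell
# With --create, -d/--dbname names an existing database to connect to first
# (here the default "postgres" database); the database to restore is then
# created from the archive itself. Without -d, pg_restore would just print
# SQL to stdout, which nohup captures into nohup.out.
pg_restore --host=localhost --username=postgres --password --role=postgres \
  --clean --create --dbname=postgres "/path/to/data.backup"
```

Is this the correct way to combine `--create` with `--dbname`?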
When we run under nohup, our err.out file contains a few errors, such as:
pg_restore: [archiver (db)] Error while PROCESSING TOC:
pg_restore: [archiver (db)] Error from TOC entry 187; 1259 17762 TABLE ping arcgis
pg_restore: [archiver (db)] could not execute query: ERROR: type "geography" does not exist
pg_restore: [archiver (db)] could not execute query: ERROR: schema "arcgis" does not exist
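Our reading of these errors is that the target database is missing the PostGIS extension (which provides the `geography` type) and the `arcgis` schema. We are unsure whether we are supposed to pre-create them before restoring; a sketch of what we have in mind (connection details and database name are our setup, not from any documentation):

```shell
# Assumption: pre-creating the PostGIS extension and the arcgis schema
# in the target database before running pg_restore would avoid the
# "type geography does not exist" / "schema arcgis does not exist" errors.
psql --host=localhost --username=postgres --dbname=dbname_to_restore \
  --command='CREATE EXTENSION IF NOT EXISTS postgis;'
psql --host=localhost --username=postgres --dbname=dbname_to_restore \
  --command='CREATE SCHEMA IF NOT EXISTS arcgis;'
```

Is that the expected workflow, or should the restore itself recreate these?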
What are we missing? What are the bare-minimum flags needed to load the data into the PostgreSQL database? Are there environmental considerations we need to address? Why isn't the data being loaded into the database?