Docker, external volumes and Postgres data permissions
Added by Aleksandrs A over 4 years ago
What happened:
- On a local machine (Mac) I launched an OP Docker container with mounted folders, as described in the documentation; everything was fine.
- Then I tried to migrate the data to a remote server and launch a new container there, so far with no success.
- On any attempt to move the data folders across machines, the same type of Postgres error pops up within the container:
```
Starting PostgreSQL 9.6 database server: main ...
Error: Config owner (postgres:102) and data owner (app:1000) do not match,
and config owner is not root ... failed! failed!
```
The same error occurs if I try to move data created by a container on a remote machine to my local machine. The reported config owner and data owner values vary depending on the direction of the migration (machine1 to machine2, or machine2 to machine1).
Moreover, the same error suddenly started appearing on attempts to re-launch a container created via Kitematic.
I have tried many scenarios, including altering permissions on the data folders, but nothing has worked so far. Some solutions on the net suggest manipulating permissions within the container, but that doesn't help if the container can't be launched in the first place.
It would be just great to establish what can be done to mitigate such errors. Thanks in advance for any help on this!
Replies (2)
Hello
I also ran into this issue and sorted it out. When the data is transferred, the ownership changes, so you need to set it back to 102:104 with the following command:
chown 102:104 <folder/file>
You would need to do this for every object, or better, apply it recursively.
Thanks
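A minimal sketch of the fix described above, run on the host where the mounted folders live. The path `pgdata` is a placeholder for your actual Postgres data directory, and 102:104 is the uid:gid pair from this thread; check what your container actually expects before applying it.

```shell
# Show the numeric uid:gid that currently owns the data directory.
# This is what the container's ownership check compares against.
stat -c '%u:%g' pgdata

# Reset ownership recursively to the uid:gid the container's
# postgres user expects (102:104 here); requires root on the host.
sudo chown -R 102:104 pgdata
```

The key point is that Docker bind mounts carry numeric uids/gids straight through, so a folder owned by uid 1000 on the host shows up as uid 1000 inside the container, regardless of user names.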
So, I came up with the following solution for my case (testing migration from one machine to another).
I switched to a Docker Compose based setup on the target machine, with the data and db volumes managed by Docker. In that case one should indeed follow the Docker documentation on moving data volumes, although the original docs are somewhat difficult to understand. Here is a quick overview of the steps:
- Run backup with the opdata and pgdata volumes mounted to it; backup creates zip/tar archives on the source machine.
- Run restore with the target opdata and pgdata volumes mounted to it.

There is one more thing if you want to test the transfer to a local machine: if the source OP instance was configured to use https, you won't get past the login screen and will get an error message. To solve this without altering the OP configuration, just spin up a reverse proxy with self-signed certificates and route the localhost traffic to the replica OP instance through it.
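The backup/restore steps above can be sketched with throwaway containers. The volume names `opdata` and `pgdata` are assumptions; Compose usually prefixes them with the project name, so check `docker volume ls` for the real names first.

```shell
# On the source machine: archive each named volume into the
# current directory via a temporary container.
for vol in opdata pgdata; do
  docker run --rm \
    -v "$vol":/data:ro \
    -v "$(pwd)":/backup \
    alpine tar czf "/backup/$vol.tar.gz" -C /data .
done

# Copy the archives to the target machine, then unpack them
# into freshly created volumes there.
for vol in opdata pgdata; do
  docker volume create "$vol"
  docker run --rm \
    -v "$vol":/data \
    -v "$(pwd)":/backup \
    alpine tar xzf "/backup/$vol.tar.gz" -C /data
done
```

Because tar records numeric uids/gids and the extraction runs as root inside the container, file ownership is restored as-is, which is exactly what avoids the config owner / data owner mismatch from the original post.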
Some useful resources for the scenario: