• 1 Post
  • 18 Comments
Joined 2 years ago
Cake day: June 19th, 2023

  • Also, to add to this: your setup sounds almost identical to mine. I have a NAS with multiple TBs of storage and another machine with plenty of CPU and RAM. Using NFS for your docker share is going to be a pain. I “fixed” my pains by defining the shares inside my docker-compose files instead. What I mean by that is: declare your share in a top-level volumes section:

    volumes:
      media:
        driver: local
        driver_opts:
          type: "nfs"
          # addr is the NFS server's address; ro mounts the share read-only
          o: "addr=192.168.0.0,ro"
          # the path exported by the NAS (note the leading colon)
          device: ":/mnt/zraid_default/media"

    Then mount that volume when the container comes up:

    services:
      ...
        volumes:
          - type: volume
            source: media        # the named NFS volume defined above
            target: /data        # mount point inside the container
            volume:
              nocopy: true
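    To sanity-check the result, here’s a quick usage sketch (the service name myservice is hypothetical, not from my actual stack):

    # validate the compose file, bring the stack up, and confirm the NFS volume is mounted
    docker compose config
    docker compose up -d
    docker compose exec myservice ls /data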

    This way, I don’t have to worry as much. I also bind-mount local directories for storing all my container data, e.g. ./container-data:/path/in/container.
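    One gotcha with those local directories is first-run ownership; a minimal sketch, assuming the container runs as UID/GID 5000 (an assumption, matching the GID example below):

    # pre-create the data directory so the container’s user owns it (UID/GID 5000 is assumed)
    mkdir -p ./container-data
    sudo chown 5000:5000 ./container-data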


  • Basically, when you make a new group or user, make sure the NUMBER it uses matches whatever you’re using on your export. So, for example: if you run groupadd -g 5000 nfsusers, just make sure that whenever you create the share on your NAS, you use a GID of 5000, no matter what you actually name the group. Personally, I make sure the names and GIDs/UIDs are the same across systems for ease of use.
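    A minimal sketch of what that looks like end to end (the subnet and squash options are assumptions for illustration; the export path matches the compose example above):

    # on the client: create the group with an explicit GID
    sudo groupadd -g 5000 nfsusers

    # on the NAS: a matching /etc/exports line (shown as a comment; adjust to taste)
    # /mnt/zraid_default/media 192.168.0.0/24(ro,all_squash,anonuid=5000,anongid=5000)

    # reload the export table after editing /etc/exports
    sudo exportfs -ra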


  • I’m 100% sure that your problem is permissions; you need to make sure they match on both ends. Personally, I created a group specifically for my NFS shares, and when I export them they are mapped to that group. You don’t have to do this: you can use your normal users, you just have to make sure the UID/GID numbers match. The names can differ as long as the numbers line up.
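    A quick way to verify is to compare the numeric IDs on both machines (the username and path here are examples):

    # print the numeric UID/GID for a user; run this on both the client and the NAS
    id someuser

    # list the share with numeric owners instead of names
    ls -ln /mnt/zraid_default/media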