Error while mounting remote backup location

Problem Description
I configured backups to a valid remote SFTP location and got an error saying something about sshfs and “file exists”.

I don’t know whether there is at least one old backup available and all the following ones failed, or whether nothing has worked yet. What I do know is that the credentials and permissions are good for FreedomBox to access the remote server. The server runs ProFTPD with the mod_sftp module; it’s the best way I found to give SFTP access without any kind of shell access or global filesystem visibility.
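(As a side note, the credentials can be verified independently of FreedomBox with a small SFTP client script. A minimal sketch, assuming the paramiko library is available; host, user and password below are placeholders:)

import paramiko

# Hypothetical check: connect over SFTP with the same credentials that
# FreedomBox uses and list the remote directory. Host, port, username
# and password are placeholders.
transport = paramiko.Transport(('remote-host.net', 22))
transport.connect(username='user', password='secret')
sftp = paramiko.SFTPClient.from_transport(transport)
print(sftp.listdir('.'))  # should succeed and show the backup directory
transport.close()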

Steps to Reproduce

  1. Go to backups page
  2. Look at the second option. It says something like user@remote-host.net:
  3. Nothing appears under the expandable arrow. I assume this means no backup has ever been made.
  4. Click on “Mount Location” (the eye icon).
  5. Error appears.

Expected Results
I would expect this to be just a warning, or a self-healing error, or that it gave me more information about what to do about it. But of course, what I really expect is to enjoy automatic backups.

Actual Results

The error message:

Mounting failed: ('sshfs', '', 'Traceback (most recent call last):
  File "/usr/share/plinth/actions/sshfs", line 127, in <module>
    main()
  File "/usr/share/plinth/actions/sshfs", line 123, in main
    subcommand_method(arguments)
  File "/usr/share/plinth/actions/sshfs", line 50, in subcommand_mount
    validate_mountpoint(arguments.mountpoint)
  File "/usr/share/plinth/actions/sshfs", line 90, in validate_mountpoint
    os.makedirs(mountpoint)
  File "/usr/lib/python3.7/os.py", line 221, in makedirs
    mkdir(name, mode)
FileExistsError: [Errno 17] File exists: '/media/d4d74d8a-aaaa-bbbb-cccc-024207c0b155'
')

Information

  • FreedomBox version: 21.4.2 on Debian GNU/Linux 10 (buster); reported as up to date
  • Hardware: Pioneer home server, Olimex OLinuXino A20 (bought in 2019)
  • How did you install FreedomBox?: ‘bought pre-installed hardware’

This one is a bit strange. Here is the relevant code from /usr/share/plinth/actions/sshfs:

def validate_mountpoint(mountpoint):
    """Check that the folder is empty, and create it if it doesn't exist"""
    if os.path.exists(mountpoint):
        if _is_mounted(mountpoint):
            raise AlreadyMountedError('Mountpoint %s already mounted' %
                                      mountpoint)
        if os.listdir(mountpoint) or not os.path.isdir(mountpoint):
            raise ValueError('Mountpoint %s is not an empty directory' %
                             mountpoint)
    else:
        os.makedirs(mountpoint)

So os.path.exists(mountpoint) returned False, but os.makedirs(mountpoint) then failed because the path already exists. I would guess there is some race condition where the folder appeared between the os.path.exists() check and the os.makedirs() call.
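There is another possibility worth noting: os.path.exists() returns False whenever the underlying os.stat() fails for any reason, not only when the path is missing, so the two calls can disagree without any race. A minimal sketch of that behaviour (the path is just a placeholder):

import os

# os.path.exists() maps *any* failing os.stat() to False, not just
# "no such file". If stat() fails for some other reason -- for example
# ENOTCONN on a dead network mount -- exists() reports False while
# mkdir() can still fail with EEXIST on the directory entry.
path = '/media/some-mountpoint'  # placeholder

try:
    os.stat(path)
except OSError as error:
    print('stat failed:', error)  # the real underlying error

print(os.path.exists(path))      # False whenever stat() fails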

If you try to mount the location again, do you get the same error?


Yes, it fails consistently.
Do you think it could have something to do with ProFTPD?


EDIT: I tried a couple of things.

First:

    user@freedombox:~$ ls /media/d4d74d8a-aaaa-bbbb-cccc-024207c0b155
    ls: cannot access '/media/d4d74d8a-aaaa-bbbb-cccc-024207c0b155': Permission denied
    user@freedombox:~$ sudo ls -la /media/d4d74d8a-aaaa-bbbb-cccc-024207c0b155
    [sudo] password for user: 
    ls: cannot access '/media/d4d74d8a-aaaa-bbbb-cccc-024207c0b155': Transport endpoint is not connected

I searched for that error, and people suggest unmounting and remounting the unit. So I did (just the unmount, actually, with sudo umount /media/d4d74d8a-aaaa-bbbb-cccc-024207c0b155).

After that, I “mounted the location” from Plinth and: it worked! I can see that I have backups from April 24 to May 17. FreedomBox had been notifying me of this error for half a month until I logged in. The message could be improved, but it was enough to notice that backups were no longer being made.

Now, how could we better manage this error? Could it also happen with OpenSSH’s sftp that the folder exists and “the mount point is active” while the underlying connection is broken? My backup server is a home server too, located somewhere else, so it can suffer occasional downtime.
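For example (just a sketch, reusing _is_mounted and AlreadyMountedError from the existing action script), the check could call os.stat() directly and treat ENOTCONN as a stale mount to clean up before mounting again:

import errno
import os
import subprocess

def validate_mountpoint(mountpoint):
    """Sketch: distinguish a missing directory from a stale sshfs mount."""
    try:
        os.stat(mountpoint)
    except FileNotFoundError:
        os.makedirs(mountpoint)  # genuinely missing: create it, as today
        return
    except OSError as error:
        if error.errno == errno.ENOTCONN:
            # "Transport endpoint is not connected": the previous mount
            # died; unmount the stale mountpoint so a fresh mount works.
            subprocess.run(['umount', mountpoint], check=True)
            return
        raise
    if _is_mounted(mountpoint):
        raise AlreadyMountedError('Mountpoint %s already mounted' %
                                  mountpoint)
    if os.listdir(mountpoint) or not os.path.isdir(mountpoint):
        raise ValueError('Mountpoint %s is not an empty directory' %
                         mountpoint)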

In fstab there are reconnect and max-retries options that are useful for this kind of network filesystem scenario. Could something like this be implemented?
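Since FreedomBox mounts with the sshfs command rather than through fstab, the equivalent would presumably be passing sshfs’s reconnect option together with ssh keep-alive settings. A hypothetical sketch:

import subprocess

def mount_remote(remote, mountpoint):
    """Sketch: mount with options meant to survive transient outages.

    'reconnect' is an sshfs option; ServerAliveInterval and
    ServerAliveCountMax are standard ssh options that make the client
    notice a dead connection instead of hanging on it forever.
    """
    subprocess.run([
        'sshfs', remote, mountpoint,
        '-o', 'reconnect',
        '-o', 'ServerAliveInterval=15',
        '-o', 'ServerAliveCountMax=3',
    ], check=True)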

Cheers!