Nick Edwards | 7 Jul 14:59 2014

sshfs reconnect problems


I'm having a problem with sshfs. I mount with:

sshfs nedward2 <at> /Volumes/HZZ/

The problem arises when my IP address changes, which happens frequently 
as I mainly work on a laptop with wireless connection. I've seen lots of 
other messages relating to sshfs hanging in this case, but my problem is 
somewhat different. As well as hanging, sshfs also attempts to reconnect 
automatically, even though I did *not* specify the "-o reconnect" 
option. Unfortunately, my host does not support key based authentication 
or anything other than password authentication, so ssh authentication 
fails. The problem is that it keeps trying, resulting in hundreds of 
failed ssh connection attempts from my username per hour, which sets off 
alarm bells for my sysadmin and gets my account suspended. Is 
there some way to stop sshfs from attempting to reconnect? The hanging I 
can deal with, as I can just kill sshfs processes and reconnect by hand, 
but the problem is I don't always notice straight away when my IP 
address has changed, so I need to stop the automatic reconnects.
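The kill-and-remount-by-hand workflow could be sketched as a small helper (not from the thread; the mountpoint is the one from the post, and the unmount commands are left as comments since they differ per platform):

```shell
#!/bin/sh
# Sketch of the manual cleanup step described above: report whether the
# sshfs mountpoint is still listed, so it can be unmounted and remounted
# by hand. Nothing here reconnects automatically.
check_mount() {
    mp="$1"
    if mount | grep -q " on $mp "; then
        echo "stale mount at $mp"
        # then unmount by hand:
        #   OS X:  umount -f "$mp"
        #   Linux: fusermount -u "$mp"
    else
        echo "no mount at $mp"
    fi
}

check_mount "/Volumes/HZZ"
```

Running this from cron or a network-change hook would at least surface a dead mount sooner, without triggering any automatic ssh logins.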

I recently upgraded my sshfs version, and I don't think I saw such 
problems with the old version.

 <at> c161219 ~$ sshfs --version
SSHFS version 2.5 (OSXFUSE SSHFS 2.5.0)
OSXFUSE library version: FUSE 2.7.3 / OSXFUSE 2.6.4

Many Thanks,


andrea biancalana | 2 Jul 18:50 2014

Re: pam_mount + sshfs

On Mon, 16 Jun 2014 17:36:45 +0200, andrea biancalana <andrea.biancalana <at>> wrote:

> Hi,
> I'm trying to set up pam_mount + sshfs; I'd like to mount users' home directories from a remote server.

It seems pmt-fd0ssh doesn't work properly, so I've set ssh="0" and the password_stdin option.

I found that known_hosts must be set system-wide (/etc/ssh/ssh_known_hosts), and
now the remote home directories are mounted at the /homeX mountpoint.

The problem is that it doesn't work if I use /home/%(USER) as the local mountpoint.

Any idea?

Thanks, Andrea


<?xml version="1.0" encoding="utf-8" ?>

<debug enable="2" />

<volume fstype="fuse" path="/usr/bin/sshfs#%(USER)@xx.xx.xx.xx:" mountpoint="/homeX/%(USER)" 
options="password_stdin,reconnect,nonempty" ssh="0"/>
<logout wait="0" hup="0" term="0" kill="0" />
<mkmountpoint enable="1" remove="true" />

golodov | 24 Jun 16:47 2014


Please! Please! Please! Make sshfs try an automatic reconnect if the 
server is down, or was rebooted...
I think it should be done this way: ping the server every N seconds (e.g. 
every 5 seconds); if the ping fails 3 times in a row, reconnect.
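The polling idea above could be sketched roughly as follows (everything here is illustrative: the probe command is a parameter, where a real version would use something like `ping -c 1 "$server"`, and the remount is only a comment):

```shell
#!/bin/sh
# Probe the server repeatedly and count consecutive failures; after 3
# in a row, a real script would remount. Here we just report.
watch_until_down() {
    probe="$1"      # command that succeeds while the server is up
    attempts="$2"   # how many probes to run in this sketch
    fails=0
    i=1
    while [ "$i" -le "$attempts" ]; do
        if $probe; then
            fails=0
        else
            fails=$((fails + 1))
        fi
        if [ "$fails" -ge 3 ]; then
            echo "down after probe $i"
            # a real script would now do something like:
            #   fusermount -u /mnt/remote && sshfs server:/dir /mnt/remote
            return 0
        fi
        # sleep 5   # the "every N seconds" part, skipped in this sketch
        i=$((i + 1))
    done
    echo "still up"
}

watch_until_down false 5
```

With `false` as the probe, the third consecutive failure is reported on probe 3; with `true`, the loop runs out and reports the server still up.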


Best regards,
Stanislav Golodov
Mogilev, Belarus
Product Manager, aheadWorks co.
+375 29 7494602

andrea biancalana | 16 Jun 17:36 2014

pam_mount + sshfs

I'm trying to set up pam_mount + sshfs; I'd like to mount users' home directories from a remote server.

It seems there are 2 problems:

1) pam_mount doesn't send the password (through pmt-fd0ssh) to the command "mount.fuse sshfs#..."

2) using ssh keys (or running "mount.fuse sshfs#..." from the command line) I can mount the remote home
directory anywhere except in the local home directory

Is this a bug in sshfs?

Thanks, Andrea

IT Department | 13 May 00:04 2014

Issue mounting SSHFS'd home folder during login

I've been using sshfs, controlled by pam_mount, to mount users' remote home
folders onto the local home folders of diskless terminals.  This was working
fine with the versions of ssh and sshfs included in Ubuntu 11.10; however,
with the versions in 14.04 (sshfs 2.5, OpenSSH 6.6p1) the mount process
locks up.

With debugging activated, and testing with the following pattern, it locks up
after printing "executing <ssh> ...."

# Test reproduction #
Here's how I was able to test the situation, eliminating pam_mount and many
other variables: (Caveat, I'm typing these commands into this email by hand
- if there's typos, please correct!)
1. Create a test user (skip if you already have one) - for this procedure
I'll call the username 'test'
2. Create a user with the same name on a server.
3. Remove the test user's home folder: `rm -r /home/test`
4. Create an empty home folder, thus simulating what pam_mount does
automatically: `mkdir /home/test; chown test:test /home/test`
5. Log in as the test user: `su - test`
6. Mount the user's home folder: `sshfs test@myserver:/home/test /home/test
-o workaround=rename,nonempty,allow_other,nodev,nosuid`
7. Observe to see if the mount completes.

The process hangs at step 7; thankfully, CTRL-C cancels it when testing this
way.  If pam_mount calls it during login...  that's another story.

Note that after pressing CTRL-C to cancel the process, you must execute
`umount /home/test` before testing again.


Kirk Sefchik | 15 Apr 17:39 2014

<file> file is damaged

When I try to open binary files over SSHFS, I often get an error “<file> is damaged and can’t be opened.
You should move it to the Trash”, with an option to trash or cancel. But if I copy the file from the virtual
drive over to my desktop, it mysteriously works fine. Any idea why this might be happening? 
Milton Woods | 9 Apr 08:40 2014

Patch for interrupt handling [SEC=UNCLASSIFIED]

Hi sshfs developers,

I have noticed problems with interrupt handling in sshfs/2.5 and earlier. Even though the "intr" option
causes signals to be sent to request threads in sshfs, some operations become uninterruptible if a remote
host stops responding.

The "reconnect" option can help if there is a communication problem between hosts, but there are
situations where a reconnect will not be triggered, such as when the remote filesystem is itself mounted
over a network (e.g. NFS) from a failed server.

The attached patch file contains a number of code changes (relative to sshfs-fuse-2.5/sshfs.c) that are
intended to make interrupt handling more robust. Although it may have been better to use the low-level
fuse API, I decided to stick with the high-level API so that fewer changes were required.

Please consider the patch for inclusion in the next release (2.6?) of sshfs.

Attachment (sshfs-intr.patch): application/octet-stream, 17 KiB
Fuse-sshfs mailing list

Daniel Hahler | 1 Apr 11:14 2014

sshfs hangs/blocks often after suspend/resume cycle: configurable timeout?


I am using the following commands to mount a remote file system via sshfs:

sshfs -o reconnect -o ro -o allow_other -o user=www-data host:/remote/path1 /mnt/sshfs-ro/local1
sshfs -o reconnect -o ro -o allow_other -o user=www-data host:/remote/path2 /mnt/sshfs-ro/local2

The problem is that after resuming the laptop, accessing /mnt/union/local1 (or /mnt/sshfs-ro/local1)
might block for a long time. That means that a simple `ls` will hang for minutes, and is not interruptible
using Ctrl-C.
This also might happen when changing networks (from wired to wifi).

Is it possible to make sshfs behave better when resuming from suspend or changing networks?

I could not find an option to set/lower a timeout sshfs would use to re-connect to the remote host.
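One hedged suggestion, not from the thread: since sshfs tunnels over a regular ssh session, ssh's keepalive settings can act as a rough timeout. After ServerAliveInterval x ServerAliveCountMax seconds without a reply, ssh drops the dead connection, at which point `-o reconnect` can take over instead of blocking. In ~/.ssh/config, with illustrative values:

```
Host host
    ServerAliveInterval 5
    ServerAliveCountMax 3
```

The same options can also be passed per mount, since sshfs forwards unrecognized `-o` options to ssh (e.g. `-o ServerAliveInterval=5`).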

FWIW, I am using unionfs-fuse on top of sshfs, but the blocking appears to result from sshfs.





David Raymond | 11 Mar 20:28 2014

writing with wrong permissions

I have been experimenting with using sshfs as a replacement for nfs,
given the lack of client authentication with the latter.  I start
sshfs on the client as root with something like this in the fstab:

root@gryphon:/home.gryphon /home.gryphon fuse.sshfs \
      defaults,_netdev,allow_other,default_permissions 0 0

The allow_other allows users other than root to access files in the
mounted file system and the default_permissions enforces server
permissions.  (UIDs and GIDs are the same on server and client.)
This all works, but when I create a file as a non-root user,
for example,

      echo "some stuff" > junk

"junk" ends up with root permissions.  Oops!

Am I missing something?  Is this a bug or a feature?  Or am I
trying to make sshfs do something it wasn't intended to do?

I am using sshfs 2.5-1 and fuse 2.9.3-2 on Arch linux.
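A hedged guess, not a confirmed diagnosis: sshfs performs every server-side operation as the single ssh user it connected as (root here), so files created through the mount are created by root on the server; default_permissions only adds client-side permission checks and never rewrites ownership. One common alternative is a separate mount per user, e.g. an fstab line like the following (the username is illustrative):

```
dave@gryphon:/home.gryphon/dave  /home.gryphon/dave  fuse.sshfs  noauto,user,idmap=user,_netdev  0  0
```

With one connection per user, files land on the server with that user's ownership, at the cost of one ssh session each.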


Dave Raymond


David J. Raymond
Prof. of Physics

Sencer Selcuk | 27 Feb 05:02 2014

Bug with SSHFS


I think I've found a bug in sshfs, and wanted to report it. First let me
explain the problem and how you can reproduce it:

I mount a remote folder on my local computer without any options:

    sshfs server:. ~/mnt

I have only these lines in my .ssh/config file; when I remove them the
problem is fixed, but I don't want to do that.

    Host *
      ControlMaster auto
      ControlPath /tmp/%r@%h:%p
      ControlPersist 60

Now when I go to ~/mnt folder and run an ssh command (which is much faster for
many cases, though not for this example) like this:

    ssh server "ls" > list.txt

it hangs without any response forever. I can stop the command by issuing Ctrl+C,
but then I cannot even run a simple `ls` in my home directory or in the mnt folder
until I kill sshfs with `kill -9`.
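A hedged workaround, assuming the hang comes from the redirected command sharing the same multiplexed connection that sshfs is using: disable multiplexing for ad-hoc commands, either per invocation with `ssh -o ControlPath=none server "ls" > list.txt`, or via a dedicated alias in .ssh/config:

```
Host server-nomux
    HostName server
    ControlMaster no
    ControlPath none
```

Then `ssh server-nomux "ls" > list.txt` opens a fresh connection while sshfs keeps using the shared one.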

I don't have this problem if I issue the above "ssh server "ls" > list.txt"
command outside the mnt folder, nor do I if I don't redirect the output to a
file. Here is ssh debug output:

(Continue reading)

Luis Perez | 28 Jan 21:27 2014

Unable to lock database file in Cadence


I am having an issue when using SSHFS and trying to edit my Cadence files.
When mounting, do I need to specify "edit" mode?

Below is the command line that I am currently using:

sshfs perxxx <at> /home/perexxxx/xxxx

