Jordan Hubbard | 17 May 07:09 2015

Re: Updating makes SMB shares unmodifiable

Thanks for getting back to us, Philip.  Can you answer our question from the previous email, however?

> Can you run smbstatus when it's read-only and smbstatus when it's writable, and attach them both to a ticket?  Josh's theory is that you're actually logged in under two different users in this scenario.

We’re pretty sure this is a configuration problem on your side.
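
Josh's two-user theory is easy to check mechanically once both captures exist. Below is a minimal Python sketch, assuming smbstatus's usual `PID  Username  Group  Machine` session table; the sample captures are hypothetical stand-ins for the real output:

```python
# Sketch: compare the usernames seen in two smbstatus captures.
# The sample text below is hypothetical; real input would come from
# running `smbstatus` on the server in each state.

def session_users(smbstatus_text):
    """Extract the Username column from smbstatus's session table."""
    users = set()
    in_table = False
    for line in smbstatus_text.splitlines():
        if line.startswith("----"):
            in_table = True          # the dashed rule precedes the rows
            continue
        if in_table:
            fields = line.split()
            if len(fields) >= 2 and fields[0].isdigit():
                users.add(fields[1])  # fields: PID, Username, ...
    return users

readonly_capture = """\
Samba version 4.1.17
PID     Username      Group         Machine
-------------------------------------------------------------------
1234    nobody        nogroup       mac-client   (192.168.1.10)
"""

writable_capture = """\
Samba version 4.1.17
PID     Username      Group         Machine
-------------------------------------------------------------------
5678    guest         nogroup       mac-client   (192.168.1.10)
"""

if session_users(readonly_capture) != session_users(writable_capture):
    print("different users across the two states -- theory supported")
```

If the two sets differ (as in the made-up captures here), the client is authenticating differently in the two states, which would explain the permission flip.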

Thanks,

- Jordan

> On May 16, 2015, at 9:48 PM, Philip Robar <philip.robar <at> gmail.com> wrote:
> 
> Thanks for getting back to me on this so quickly—somehow I didn't see your response until today.
> 
> Unfortunately, things have taken a turn for the worse. After adding a new pool to my main server, neither adding a share via the Wizard nor stopping and starting the CIFS service makes shares modifiable. I really need this machine to be working, so I'm going to do a fresh install of the latest 9.3 release, configure it from scratch, and then start tracking the nightlies again.
> 
> The other server, which was showing the same problem, is still running a recent nightly but now has new pools and shares. Its new shares are working as expected after a cold boot today. It's going to be retired soon, but I'll track the nightlies on it for the next few days to see if the problem shows up again.
> 
> 
> Phil
> 

_______________________________________________
FreeNAS-testing mailing list

Philip Robar | 5 May 08:24 2015

Updating makes SMB shares unmodifiable

I've run into this issue twice while updating between 9.3 nightlies over the last couple of weeks. Each time, after I upgrade, my guest-access-only SMB shares can't be written to even though the shares are exported and mounted as read/write. I've seen this on two different servers with both OS X and Windows clients. No amount of messing around with permissions/ACLs and ownership seems to fix it. The only workaround I've found is to create a new guest-access SMB share via the Wizard, which, as a side effect, fixes the existing share. (I only have a single pool on each server, so the new dataset and share are created next to the existing dataset and share.)
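
For what it's worth, the read-only symptom can be confirmed from a client without trusting the mount flags. A minimal sketch (the path used in the demo is just a local directory; a real check would point at the SMB mount) that attempts an actual write:

```python
import errno
import os
import tempfile

def share_is_writable(mount_point):
    """Attempt a real write on the mount; mount flags alone can lie."""
    try:
        fd, path = tempfile.mkstemp(dir=mount_point, prefix=".wtest-")
        os.close(fd)
        os.unlink(path)
        return True
    except OSError as e:
        if e.errno in (errno.EACCES, errno.EROFS, errno.EPERM):
            return False
        raise

# Demo against a local directory; a real check would use the SMB
# mount point on the client, e.g. /Volumes/backups on OS X.
print(share_is_writable(tempfile.gettempdir()))  # -> True
```

Running this against the mount before and after the Wizard workaround would pin down exactly when writability flips.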

Phil

Jordan Hubbard | 22 Mar 20:40 2015

9.3 Nightlies have changed - New World Order for Trains and Branches

Hi folks,

It probably hasn't escaped anyone's notice that the last couple of weeks have been a little weird in terms of what's been going on with the 9.3-STABLE and 9.3-Nightlies trains.  Bits have been freely commingling back and forth, ChangeLogs have been inexplicably large, or missing, or duplicated, and "Just what the heck is going on with the FreeNAS project??" some have been asking.

The short answer is that, as alarming as this may have seemed from the outside, the actual "problems" in terms of getting a working FreeNAS up and running (or maintained) have been very small, because all of these bits have been coming from the very same branch.  For all intents and purposes, 9.3-Nightlies and 9.3-STABLE have been very close to identical for the last couple of weeks as well, so all your bits are fine (mangled ChangeLogs notwithstanding) even if you updated or installed in the middle of all this.  So, what's been up?

Well, as most of you who are familiar with Source Code Management systems know (we use git / GitHub to manage all the FreeNAS bits), there can be a lot of different branches in a single repository, or spread across multiple repositories as FreeNAS is, all being worked on in parallel to facilitate feature, experimental, and production lines of development.  It can sometimes be hard even for a project's own developers to keep all the changes straight, so keeping the number of such branches low significantly reduces the cognitive overload and also ensures that things don't get "lost" by us failing to merge them or, worse, broken by mis-merging them across divergent lines of development.

One of the branches we have been using for a long time in the freenas repo is called "master", a semi-standard convention in git which essentially corresponds to "trunk" or "head" in other Source Code Management systems, though we did not use this convention uniformly.  Some of our repositories, like ports or trueos, didn't even have a master branch and we worked from other branches (if you're geeky enough to know or care, that information is in freenas/build/repos.sh).  Anyway, we also built the Nightlies from this master branch, "publishing" those nightly builds via the Software Update mechanism to a special train called 9.3-Nightlies and also pushing the builds up to http://download.freenas.org/nightlies/9.3-Nightlies for the benefit of whoever might be interested.  At least, that was the way things were supposed to be.  Somewhere along the way, I personally derped a change and altered the train name to 9.3-CURRENT by mistake for a while, though the builds still got published on download.freenas.org under the original name.  The Nightlies just sort of went off the rails for a while without a lot of people really noticing, since most people stick to the 9.3-STABLE train and we in the FreeNAS project mostly do our own personal builds to test changes.

Once we noticed this and went to fix it, we had a discussion about what master really meant anymore: we did most of our work in personal branches until it was time to merge, the rate of change had slowed down quite a bit (believe it or not!), and the SU mechanism meant that what was on the various branches didn't have to be *released* until we decided it was time.  So maybe we should just cut down on the branch overhead, merge to our two stable branches (one for FreeNAS, one for TrueNAS), and cut nightlies from those.  The FreeNAS 10 work was already off in its own set of branches, so it didn't need "master" anymore either; it was, for all intents and purposes, a dead branch.

So, to make an already too-long story shorter, I cut the Nightlies over to the 9.3-STABLE branch (but still published under the 9.3-Nightlies TRAIN name, since the two concepts are separate) with the general idea being that they'd still kick off once a night and let people test "what was coming" in that branch without us necessarily declaring it release-worthy until doing a release on the 9.3-STABLE train.  Again, trains and branches are separate.  You can have a 1:1 relationship between a train and a branch, or you can decide to have one branch feed multiple trains, the only difference being a matter of timing.
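
The branch-vs-train distinction above can be summed up in a few lines. A toy sketch (names only, not any real release-engineering code) where one branch feeds two trains that differ only in publishing policy:

```python
# Toy model of the relationship described above: trains are named
# release channels, branches are where the code lives, and one branch
# can feed several trains that differ only in publishing policy.

TRAINS = {
    "9.3-Nightlies": {"branch": "9.3-STABLE",
                      "publish": "every night at 23:30 PDT"},
    "9.3-STABLE":    {"branch": "9.3-STABLE",
                      "publish": "when declared release-worthy"},
}

feeding_branches = {t["branch"] for t in TRAINS.values()}
assert feeding_branches == {"9.3-STABLE"}   # one branch...
assert len(TRAINS) == 2                     # ...feeding two trains
```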

The problem was that, as much as this was true on a technical level, we had made a lot of *assumptions* about a 1:1 relationship between trains and branches because, up until that point, they were 1:1 as a matter of policy.  So everything got all "broken" for at least a week while those assumptions were ferreted out, and because some of the resulting changes were reactionary, some needed to be redone.  It was basically all a bit of a clown show for a while until I managed to figure out just how to make the Release Engineering process actually conform to the New World Order we'd established by retiring the master branch and making 9.3-Nightlies and 9.3-STABLE separate again.

So, what is the new world order?  It's now (finally) pretty simple:

1. 9.3-Nightlies is the *Train* name for builds done off the 9.3-STABLE *Branch* every night at 23:30 PDT.  You can install builds from that train simply by pointing your Software Update selector at it.  Those builds are no longer uploaded to download.freenas.org in the interest of saving space; they're available *only* via the Software Update mechanism, so http://download.freenas.org/nightlies/9.3-Nightlies is *gone* and, frankly, you never really needed those builds to begin with.  To jump onto the nightlies, even from scratch, just install the latest 9.3-STABLE build and then switch trains.  Easy.  Read on.

2. 9.3-STABLE is both the *Train* name and the location at http://download.freenas.org/9.3/STABLE for getting the latest "blessed stable bits", i.e. official Software Updates, from the FreeNAS project.  These are still published as fully installable releases (http://download.freenas.org/9.3/latest always points at the most recent one), and if you don't like the idea of being a tester, just stay on this Train.

The following bug query is also now even more useful:  https://bugs.freenas.org/projects/freenas/issues?query_id=107

This shows you which bugs have been fixed in the 9.3-STABLE branch but haven't been rolled into an SU yet, so if you jump on the 9.3-Nightlies train you'll get those fixes, and only those fixes.  Once those bugs go to a Resolved state, you'll also know those fixes have made it into the 9.3-STABLE train (again, branch != train) and are going out to the wider audience.

Sorry for all the kerfuffle, and thank you for your patience as we made (and recovered from) these infrastructural changes!

The FreeNAS Development Team
Philip Robar | 19 Mar 19:07 2015

Request: Status from iXsystems on Upgrading

I've been following the 9.3 nightlies, which is what the FreeNAS-9.3-Current train on the Update page was delivering. Recently, I noticed that 9.3 nightly upgrades stopped showing up, i.e. no updates were appearing for the Current train. Based on poking around the downloads site, it looks like the 9.3 nightlies have stopped while new 9.3 Current and Stable builds are being delivered regularly, so I switched to the Stable train to see if anything would show up. This got me a new build. Then I switched to Current to see what would happen, and I was indeed offered an update, but it was from the Stable release I had installed to a (presumably) newer Stable build.

This all leaves me confused. Current used to be nightly, but now it's not. Current (sometimes?) upgrades one Stable release to another. Stable is being updated more often than one would expect in the FreeBSD world?

Maybe I'm the only one, or one of the few, who wants to update on a daily or frequent basis, but it would be nice if iXsystems sent the testing alias an occasional status report or heads-up on what to expect from the trains presented in the GUI.

Phil

Celso Rocha | 16 Jan 20:22 2015

Error importing ZFS volume version 9.2.1.5


Request Method:
POST 

Request URL:
http://10.1.6.5/storage/auto-import/
Software Version:
FreeNAS-9.3-STABLE-201501151844 

Exception Type:
OSError 

Exception Value:
[Errno 28] No space left on device: '/var/db/system/cores'
 

Exception Location:
/usr/local/www/freenasUI/../freenasUI/middleware/notifier.py in system_dataset_create, line 5255 

Server time:
Fri, 16 Jan 2015 14:08:42 -0500 


Environment:

Software Version: FreeNAS-9.3-STABLE-201501151844
Request Method: POST
Request URL: http://10.1.6.5/storage/auto-import/


Traceback:
File "/usr/local/lib/python2.7/site-packages/django/core/handlers/base.py" in get_response
  105.                     response = middleware_method(request, callback, callback_args, callback_kwargs)
File "/usr/local/www/freenasUI/../freenasUI/freeadmin/middleware.py" in process_view
  157.         return login_required(view_func)(request, *view_args, **view_kwargs)
File "/usr/local/lib/python2.7/site-packages/django/contrib/auth/decorators.py" in _wrapped_view
  22.                 return view_func(request, *args, **kwargs)
File "/usr/local/lib/python2.7/site-packages/django/views/generic/base.py" in view
  69.             return self.dispatch(request, *args, **kwargs)
File "/usr/local/lib/python2.7/site-packages/django/contrib/formtools/wizard/views.py" in dispatch
  236.         response = super(WizardView, self).dispatch(request, *args, **kwargs)
File "/usr/local/lib/python2.7/site-packages/django/views/generic/base.py" in dispatch
  87.         return handler(request, *args, **kwargs)
File "/usr/local/lib/python2.7/site-packages/django/contrib/formtools/wizard/views.py" in post
  297.                 return self.render_done(form, **kwargs)
File "/usr/local/lib/python2.7/site-packages/django/contrib/formtools/wizard/views.py" in render_done
  350.         done_response = self.done(final_form_list, **kwargs)
File "/usr/local/www/freenasUI/../freenasUI/storage/forms.py" in done
  938.         _n.restart("system_datasets")
File "/usr/local/www/freenasUI/../freenasUI/middleware/notifier.py" in restart
  369.         self._simplecmd("restart", what)
File "/usr/local/www/freenasUI/../freenasUI/middleware/notifier.py" in _simplecmd
  244.         f()
File "/usr/local/www/freenasUI/../freenasUI/middleware/notifier.py" in _restart_system_datasets
  5061.         systemdataset = self.system_dataset_create()
File "/usr/local/www/freenasUI/../freenasUI/middleware/notifier.py" in system_dataset_create
  5255.                 os.chmod(corepath, 0775)

Exception Type: OSError at /storage/auto-import/
Exception Value: [Errno 28] No space left on device: '/var/db/system/cores'
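
The traceback bottoms out in a bare os.chmod() with no headroom check, so a full system dataset surfaces as an unhandled OSError. A defensive sketch (the path and threshold are illustrative, not the actual notifier.py logic):

```python
import errno
import os
import shutil

def ensure_cores_dir(path, min_free_bytes=64 * 1024 * 1024):
    """Create and chmod a cores directory, but fail loudly (and early)
    when the backing filesystem is nearly full."""
    usage = shutil.disk_usage(os.path.dirname(path) or ".")
    if usage.free < min_free_bytes:
        raise RuntimeError(
            "refusing to set up %s: only %d bytes free" % (path, usage.free))
    try:
        os.makedirs(path)
    except OSError as e:
        if e.errno != errno.EEXIST:
            raise            # ENOSPC etc. propagate with context intact
    os.chmod(path, 0o775)    # same mode the traceback was applying
```

Either way, the immediate workaround for the report above is simply to free space under /var/db/system (or move the system dataset to a pool with room) before re-importing.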


--
Celso Rocha - (68) 84287013 - Rio Branco - Acre


Philip Robar | 13 Jan 21:58 2015

9.3 Current update worked, but says it failed

I just updated from FreeNAS-9.3-Nightlies-201501100401 to FreeNAS-9.3-Nightlies-201501130401

As has been usual with the 9.3 Current/Nightly train, the GUI acted like it was checking for available updates and then told me that there weren't any, but after I clicked on "Check Now" it found something. Is this a known problem?

This particular update sent me email and created an alert that the update failed:

Update failed. Check /data/update.failed for further details.

But when I logged in to my server, I found that the update appears to have actually worked, based on what is shown in the System->Boot tab and System->Information.

Here's /data/update.failed:

ps: empty file: Invalid argument
ps: empty file: Invalid argument
ps: empty file: Invalid argument
ps: empty file: Invalid argument
ps: empty file: Invalid argument
ps: empty file: Invalid argument
ps: empty file: Invalid argument
ps: empty file: Invalid argument
ps: empty file: Invalid argument
ps: empty file: Invalid argument
ps: empty file: Invalid argument
Running migrations for api:
- Nothing to migrate.
 - Loading initial data for api.
Installed 0 object(s) from 0 fixture(s)
Running migrations for freeadmin:
- Nothing to migrate.
 - Loading initial data for freeadmin.
Installed 0 object(s) from 0 fixture(s)
Running migrations for support:
- Nothing to migrate.
 - Loading initial data for support.
Installed 0 object(s) from 0 fixture(s)
Running migrations for sharing:
- Nothing to migrate.
 - Loading initial data for sharing.
Installed 0 object(s) from 0 fixture(s)
Running migrations for plugins:
- Nothing to migrate.
 - Loading initial data for plugins.
Installed 0 object(s) from 0 fixture(s)
Running migrations for network:
- Nothing to migrate.
 - Loading initial data for network.
Installed 0 object(s) from 0 fixture(s)
Running migrations for account:
- Nothing to migrate.
 - Loading initial data for account.
Installed 0 object(s) from 0 fixture(s)
Running migrations for system:
- Nothing to migrate.
 - Loading initial data for system.
Installed 0 object(s) from 0 fixture(s)
Running migrations for jails:
- Nothing to migrate.
 - Loading initial data for jails.
Installed 0 object(s) from 0 fixture(s)
Running migrations for services:
- Nothing to migrate.
 - Loading initial data for services.
Installed 0 object(s) from 0 fixture(s)
Running migrations for storage:
- Nothing to migrate.
 - Loading initial data for storage.
Installed 0 object(s) from 0 fixture(s)
Running migrations for tasks:
- Nothing to migrate.
 - Loading initial data for tasks.
Installed 0 object(s) from 0 fixture(s)
Running migrations for directoryservice:
- Nothing to migrate.
 - Loading initial data for directoryservice.
Installed 0 object(s) from 0 fixture(s)
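
Everything in that log is routine migration chatter, which suggests the "failed" flag keys off something other than a real error. A heuristic sketch (the benign patterns are guesses drawn from the log above, not the updater's actual classification) that scans such a file for genuinely suspicious lines:

```python
import re

# Patterns for lines that are known-benign noise in this log; anything
# else is treated as suspicious.  These are illustrative guesses based
# on the log above, not the updater's real logic.
BENIGN = (
    re.compile(r"^ps: empty file"),
    re.compile(r"^Running migrations for "),
    re.compile(r"^\s*-? ?(Nothing to migrate|Loading initial data)"),
    re.compile(r"^Installed \d+ object"),
)

def first_suspicious_line(log_text):
    """Return the first line no benign pattern explains, else None."""
    for line in log_text.splitlines():
        if not line.strip():
            continue
        if not any(p.match(line) for p in BENIGN):
            return line
    return None

sample = (
    "ps: empty file: Invalid argument\n"
    "Running migrations for api:\n"
    "- Nothing to migrate.\n"
    " - Loading initial data for api.\n"
    "Installed 0 object(s) from 0 fixture(s)\n"
)
print(first_suspicious_line(sample))  # -> None
```

On the log above this finds nothing suspicious, which is consistent with the update having actually succeeded despite the alert.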

Philip Robar | 5 Dec 01:35 2014

Inconsistent GUI reporting of used/free space

I'm running: "FreeNAS 9.3 2014-12-03 01:41:59 GM" nightly.

I have a 1.1 TB zpool with one file system dataset in it:

    space2/backups

which had ~900GB of data in the backups dataset composed of 2 Apple sparse images used for Time Machine backups. space2 had no user data of its own. backups had one snapshot which I used to zfs send/receive it to a different pool. (The dataset was about 450 MB in the copy.) After destroying backups' snapshot and dataset via the GUI the pool still showed roughly the same amount of space being used/free as when the backups dataset still existed. (Note: space2 has never been snapshotted.)

I then tried to destroy the space2 file system dataset and it wouldn't go away. (Should I even be able to do that?) Each time I tried, the GUI showed the used space decreasing and the free space increasing. This is what the GUI showed after two attempts:

                       Used        Available
    space2        355 GiB    800 GiB
        space2    302 GiB    817 GiB

When I chose to destroy the entire pool, the "Detach Volume" dialog box said, "You have 821 MiB of used space within this volume."

I canceled the destruction of the pool, switched to another part of the GUI and came back to "Storage - Volumes" and now I see:

                        Used        Available
    space2        821 GiB    1.1 TiB
        space2    818 GiB    1.1 TiB

On the other hand du(1) and df(1) show:

    [root <at> server /mnt/space2] # df -h .
    Filesystem    Size    Used   Avail Capacity  Mounted on
    space2        1.1T    730M    1.1T     0%    /mnt/space2

    [root <at> server /mnt/space2] # du -sh /mnt/space2
    730M    /mnt/space2
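
For what it's worth, the GUI numbers can be cross-checked against ZFS's own per-dataset accounting: `zfs list -o space` breaks USED down into space used by snapshots, by the dataset itself, by refreservation, and by children, and those components should sum to USED. A minimal sketch of that bookkeeping check (the dataset names and byte counts below are invented for illustration, not taken from this pool):

```python
# Sketch: verify that the USED column of `zfs list -Hp -o space` equals the
# sum of its components (usedbysnapshots + usedbydataset +
# usedbyrefreservation + usedbychildren). Sample values are invented.
SAMPLE = """\
space2          858993459200  322122547200  0          805306368     0  321317240832
space2/backups  858993459200  321317240832  471859200  320845381632  0  0"""

def check_space_accounting(zfs_space_output):
    """Parse `zfs list -Hp -o space`-style lines:
    name, avail, used, usedsnap, usedds, usedrefreserv, usedchild."""
    results = {}
    for line in zfs_space_output.splitlines():
        name, avail, used, snap, ds, refres, child = line.split()
        total = int(snap) + int(ds) + int(refres) + int(child)
        results[name] = (int(used), total, int(used) == total)
    return results

for name, (used, total, ok) in check_space_accounting(SAMPLE).items():
    print(name, used, total, "OK" if ok else "MISMATCH")
```

If the components don't sum to what the GUI displays, the discrepancy is in the GUI's arithmetic rather than in ZFS.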

Here's a similar situation with non-sparse data:

    space3/Media

Before deleting Media:

                          Used        Available
    space3          444 GB    15 GB
        space3      444 GB    1.5 GB
            Media    444 GB    1.5 GB

After deleting Media (and rebooting):

                        Used        Available
    space3        444 GB    16 GB
        space3    413 GB    32 GB

But, when I chose to destroy the space3 pool: "You have 6.8 MiB of used space within this volume."

    [root <at> server ~]# cd /mnt/space3                                       
    [root <at> server /mnt/space3] # df -h .
    Filesystem    Size    Used   Avail Capacity  Mounted on                 
    space3        445G    188k    445G     0%    /mnt/space3
                     
    [root <at> server /mnt/space3] # du -sh /mnt/space3
    5.0k    /mnt/space3
                                  
It certainly seems like the GUI is not calculating space statistics correctly and is not even self-consistent.
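
One pattern in the numbers above is suggestive: the detach dialog reported "821 MiB" while the volume list reported "821 GiB" for the same pool, which looks more like a unit-labeling bug than two genuinely different measurements. A small sketch of correct binary-prefix (IEC) formatting, as a reference point (my own helper, not FreeNAS code):

```python
def humanize_binary(n_bytes):
    """Format a byte count with binary (IEC) prefixes: KiB, MiB, GiB, TiB."""
    units = ["B", "KiB", "MiB", "GiB", "TiB", "PiB"]
    value = float(n_bytes)
    for unit in units:
        if value < 1024 or unit == units[-1]:
            return f"{value:.1f} {unit}"
        value /= 1024

# 821 GiB expressed in bytes must come back labeled GiB, never MiB,
# and df's "730M" corresponds to 730 MiB.
print(humanize_binary(821 * 2**30))   # → 821.0 GiB
print(humanize_binary(730 * 2**20))   # → 730.0 MiB
```

A formatter that divides by the wrong power of 1024, or picks the suffix independently of the division, would produce exactly the MiB/GiB confusion seen here.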

Phil


Ulf Panten | 12 Nov 22:42 2014

Update problem 9.2.1.8 -> 9.3 BETA

Hello,

I tried to upgrade my freenas box from 9.2.1.8 to 9.3 BETA.

Unfortunately, it doesn't find the encrypted volume anymore. If I try to import it, it asks for the
encryption key, which I didn't save. Is there any chance to recover the data on this volume?

I also didn't see any instructions in the readme to save the key before updating.

-- 
Regards,
  Ulf Panten
Jordan Hubbard | 5 Oct 17:00 2014

Re: SMB is broken in recent FreeNAS 9.3 M4 builds

Thanks for reporting this, Philip!  A fix has been checked in and will be in today's build.

- Jordan 

Sent from my iPad

> On Oct 4, 2014, at 23:32, Philip Robar <philip.robar@...> wrote:
> 
> Starting with build FreeNAS-9.3-M4-4bae40c-x64 SMB sharing has stopped working on my home LAN for both Windows 8.1 and OS X 10.10 RC1. If I downgrade to build FreeNAS-9.3-M4-4989948-x64 the problem goes away. The issue seems to center around this error:
> 
> Oct 5 00:31:28 server smbd[3574]: [2014/10/05 00:31:28.234444, 0] ../lib/util/modules.c:48(load_module)
> Oct 5 00:31:28 server smbd[3574]: Error loading module '/usr/local/lib/shared-modules/vfs/aio_pthread.so': Cannot open "/usr/local/lib/shared-modules/vfs/aio_pthread.so"
> Oct 5 00:31:28 server smbd[3574]: [2014/10/05 00:31:28.234569, 0] ../source3/smbd/vfs.c:184(vfs_init_custom)
> Oct 5 00:31:28 server smbd[3574]: error probing vfs module 'aio_pthread': NT_STATUS_UNSUCCESSFUL
> Oct 5 00:31:28 server smbd[3574]: [2014/10/05 00:31:28.234758, 0] ../source3/smbd/vfs.c:349(smbd_vfs_init)
> Oct 5 00:31:28 server smbd[3574]: smbd_vfs_init: vfs_init_custom failed for aio_pthread
> Oct 5 00:31:28 server smbd[3574]: [2014/10/05 00:31:28.234830, 0] ../source3/smbd/service.c:640(make_connection_snum)
> Oct 5 00:31:28 server smbd[3574]: vfs_init failed for service Media1
> 
> Is this a known problem?
> 
> 
> Phil
> 
> _______________________________________________
> FreeNAS-testing mailing list
> FreeNAS-testing@...
> http://lists.freenas.org/mailman/listinfo/freenas-testing
Philip Robar | 5 Oct 08:32 2014

SMB is broken in recent FreeNAS 9.3 M4 builds

Starting with build FreeNAS-9.3-M4-4bae40c-x64 SMB sharing has stopped working on my home LAN for both Windows 8.1 and OS X 10.10 RC1. If I downgrade to build FreeNAS-9.3-M4-4989948-x64 the problem goes away. The issue seems to center around this error:

Oct 5 00:31:28 server smbd[3574]: [2014/10/05 00:31:28.234444, 0] ../lib/util/modules.c:48(load_module)
Oct 5 00:31:28 server smbd[3574]: Error loading module '/usr/local/lib/shared-modules/vfs/aio_pthread.so': Cannot open "/usr/local/lib/shared-modules/vfs/aio_pthread.so"
Oct 5 00:31:28 server smbd[3574]: [2014/10/05 00:31:28.234569, 0] ../source3/smbd/vfs.c:184(vfs_init_custom)
Oct 5 00:31:28 server smbd[3574]: error probing vfs module 'aio_pthread': NT_STATUS_UNSUCCESSFUL
Oct 5 00:31:28 server smbd[3574]: [2014/10/05 00:31:28.234758, 0] ../source3/smbd/vfs.c:349(smbd_vfs_init)
Oct 5 00:31:28 server smbd[3574]: smbd_vfs_init: vfs_init_custom failed for aio_pthread
Oct 5 00:31:28 server smbd[3574]: [2014/10/05 00:31:28.234830, 0] ../source3/smbd/service.c:640(make_connection_snum)
Oct 5 00:31:28 server smbd[3574]: vfs_init failed for service Media1

Is this a known problem?


Phil

Sean Fagan | 18 Sep 22:38 2014

Re: 9.3 M4 updating is sloooooow

> From: Philip Robar <philip.robar@...>
> To: freenas-testing@...
> Subject: [FreeNAS-testing] 9.3 M4 updating is sloooooow
> 
> I recently manually updated from the latest or next to latest 9.3 M3 to 9.3 M4 32a2e11 9/16 nightly. Even taking into account that I'm now using a very slow 4GB flash drive to accommodate the increased size of 9.3, it seemed like it took a very long time. Since I wasn't expecting this big increase in time I wasn't specifically keeping track, but I think it took an hour or more to finish.
> 
> Now I'm trying to update to 9.3 M4 eae2f5f 9/17 nightly and the update has been running for 2 or 3 hours. In both cases the Manual Update progress window stops showing any updates after it gets to this point: "Step 2 of 2" "(2/3) Extracting update" "33%". The successful update never showed any more progress, it just suddenly rebooted.
> 
> Is there anything that I can look for in a log file or something like top to have an idea if the update is actually making progress?
> 
> At what point should I just give up?
> 
> Are upgrades now going to take much longer than they have in the past?

The Manual Update is doing this:

1) Download the tarball containing everything
2) Store it on the specified dataset
3) Extract it.
4) Extract the package files.
5) Create a new boot environment.
6) Install.
6a) For each package:
	freenas-install sees that it is doing an update, and it is not a delta package,
	so it first removes all of the files (and possibly directories) owned by the package
	in the package database -- from the filesystem and from the package database.
	It runs any scripts during this process.  Then it installs each file and directory into the
	filesystem and database.
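
In other words, a non-delta update rewrites every file a package owns, so total time is dominated by write throughput of the boot device, while a delta package touches only the files that changed. A toy model of the difference (a pure simulation, not the actual freenas-install code):

```python
def full_install(filesystem, pkg_db, pkg_name, new_files):
    """Non-delta update: remove every file the package owns, then
    reinstall all of them -- every file gets written again."""
    writes = 0
    for path in pkg_db.get(pkg_name, []):
        filesystem.pop(path, None)            # remove old files first
    for path, content in new_files.items():   # then install everything
        filesystem[path] = content
        writes += 1
    pkg_db[pkg_name] = list(new_files)
    return writes

def delta_install(filesystem, pkg_db, pkg_name, changed_files):
    """Delta update: only the changed files are shipped and written."""
    writes = 0
    for path, content in changed_files.items():
        filesystem[path] = content
        writes += 1
    owned = set(pkg_db.get(pkg_name, [])) | set(changed_files)
    pkg_db[pkg_name] = sorted(owned)
    return writes

fs = {f"/usr/local/file{i}": "old" for i in range(100)}
db = {"base": list(fs)}
print(full_install(fs, db, "base", {p: "new" for p in fs}))          # 100 writes
print(delta_install(fs, db, "base", {"/usr/local/file0": "newer"}))  # 1 write
```

On a slow thumb drive the 100-vs-1 ratio above is roughly the ratio in wall-clock time, which is consistent with Sean's recommendation to prefer the regular update path.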

A slow thumb drive will in fact be very slow as a result.  (I went and got a PNY USB 3
thumb drive, and enabled xhci via a loader configuration.  Things go much faster
with that.  But that particular thumb drive requires a kernel change, which just got
checked in, so it'll be a day or two.)

You can see what it's doing with "ps auxwww | grep freenas"; you should see a python
freenas-install program running.  You can truss or ktrace that if you want to check its
progress.
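
The check Sean describes boils down to scanning the process table for the installer; a trivial filter over `ps auxwww`-style output (the sample lines below are invented for illustration):

```python
# Invented ps output: a daemon, the installer, and the grep itself.
SAMPLE_PS = """\
root   812  0.0  0.1  14528  1980  -  Ss  12:01  0:00.03 /usr/sbin/syslogd
root  3921 42.0  3.2 184044 66512  -  R   12:05  1:12.44 python /usr/local/bin/freenas-install /tmp/update /
root  4010  0.0  0.0  16280  2104  0  R+  12:06  0:00.00 grep freenas"""

def find_installer(ps_output, needle="freenas-install"):
    """Return the ps lines mentioning the installer, skipping the grep itself."""
    return [line for line in ps_output.splitlines()
            if needle in line and "grep" not in line]

for line in find_installer(SAMPLE_PS):
    print(line)
```

If the installer line keeps accumulating CPU time between checks, the update is still making progress rather than hung.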

Using the recommended update will be faster in general, since not all of the packages
change all the time, and the delta packages, if available, are much smaller.

> Also since switching to 9.3 M3 I've noticed that my system hangs during a reboot after the boot USB2 flash device is detached--at least that's the last thing printed on the screen. Is this a known problem?

Yes, it has to do with FreeBSD's kernel not releasing the device.  I think it's weird that
the post-database-reboot doesn't hang.

Sean.

