Voytek Eymont | 1 Jul 03:37 2012

OT: trying to understand log entries, etc.

Totally OT: I'm trying to understand what's happening,
aka 'how does email work?'

I have Dovecot 1.x, all works fine

I noticed in the logs failed access attempts from a no-longer-hosted
account/user (the physical account was deleted/removed), most likely from
the user's BlackBerry, every ~15 minutes:

from /var/log/maillog
---
Jun 24 07:31:25 dovecot: imap-login: Disconnected: Inactivity (auth
failed, 1 attempts): user=<vvv <at> tld>, method=PLAIN, rip=101.222.222.222,
lip=111.111.111.111
---

I recreated the account/physical path /var/mail/.../tld/vvv <at> tld
I logged in with webmail, everything worked, then logged out of webmail.

I physically copied some emails (from another account) into the physical path

../../tld/vvv <at> tld/new

After a while, I see the email files are NO LONGER in ../../../new BUT now
in ../../tld/vvv <at> tld/cur

Q: that means the email was accessed, doesn't it?

BUT I don't see any further access in /var/log/maillog, neither failed
nor OK.
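
From what I've read about Maildir, a reader that opens the folder renames
each file from new/ into cur/ with a ":2," info suffix (flags like S for
Seen go after the comma). A minimal sketch of that move, with a
hypothetical stand-in path:

---
# what a Maildir reader does on open; MAILDIR is a hypothetical stand-in
MAILDIR=/var/mail/tld/user
cd "$MAILDIR" || exit 1
for f in new/*; do
    [ -e "$f" ] || continue        # nothing in new/
    mv "$f" "cur/${f#new/}:2,"     # ":2," starts the flag section
done
---

So if the files moved, something did open the mailbox.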

(Continue reading)

Stan Hoeppner | 1 Jul 09:17 2012

Re: RAID1+md concat+XFS as mailstorage

On 6/30/2012 6:17 AM, Костырев Александр Алексеевич wrote:
> So, you say that one should use this configuration in production with
> hope that such failure would never happen?

No, I'm saying you are trolling.  A concat of RAID1 pairs has
reliability identical to RAID10.  I don't see you ripping a mirror pair
from a RAID10 array and saying RAID10 sucks.  Your argument has several
flaws.
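
A back-of-envelope check, assuming n independent mirror pairs and a
per-drive failure probability p within a rebuild window: either layout
loses data only when both drives of the same pair fail, so for both the
concat of RAID1 pairs and RAID10,

    P(loss) = 1 - (1 - p^2)^n

Striping versus concatenation only changes how data is laid out across
the pairs, not which drive combinations are fatal.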

In a production environment, a dead drive will be replaced and rebuilt
before the partner fails.  In a production environment, the mirror pairs
will be duplexed across two SAS/SATA controllers.

Duplexing the mirrors makes a concat/RAID1, and a properly configured
RAID10, inherently more reliable than RAID5 or RAID6, which simply can't
be protected against controller failure.

Stating that the concat/RAID1 configuration is unreliable simply shows
your ignorance of storage system design and operation.

-- 
Stan

> -----Original Message-----
> From: dovecot-bounces <at> dovecot.org [mailto:dovecot-bounces <at> dovecot.org] On Behalf Of Stan Hoeppner
> Sent: Saturday, June 30, 2012 4:24 PM
> To: dovecot <at> dovecot.org
> Subject: Re: [Dovecot] RAID1+md concat+XFS as mailstorage
> 
> On 6/28/2012 7:15 AM, Ed W wrote:
(Continue reading)

Charles Marcus | 1 Jul 12:34 2012

Re: RAID1+md concat+XFS as mailstorage

On 2012-06-29 12:07 PM, Ed W <lists <at> wildgooses.com> wrote:
> On 29/06/2012 12:15, Charles Marcus wrote:
>> Depends on what you mean exactly by 'incorrect'...

> I'm sorry, this wasn't meant to be an attack on you,

No worries - it wasn't taken that way - I simply disagreed with the main 
point you were making, and still do. While I do agree there is some 
truth to the issue you have raised, I just don't see it as quite the 
disaster-in-waiting that you do. I have been running small RAID setups 
for quite a while. Years ago I inherited an older RAID5 (with NO hot 
spare) that gave me fits for about a month: drives would randomly 
'fail', a rebuild - which took a few HOURS, and this was with drives 
that were small by today's standards, 120GB - would fix it, then another 
one would drop out 2 or 3 days later, and so on. I finally found an 
identical replacement controller on ebay (an old 3ware card), and once 
it was swapped in, the problem was fixed. I also had one instance in a 
RAID10 setup I configured myself a few years ago where one of the pairs 
had some errors after an unclean shutdown (this was after about 3 years 
of 24/7 operation on a mail server) and went into automatic rebuild, 
which went smoothly (and was mucho faster than the RAID5 rebuilds, even 
though the drives were much bigger).

So, yes, while I acknowledge the risk, it is the risk we all run storing 
data on hard drives.

> I thought I was pointing out what is now fairly obvious stuff, but
> it's only recently that the maths has been popularised by the common
> blogs on the interwebs. Whilst I guess not everyone read the flurry
> of blog articles about this last year, I think it's due to be
(Continue reading)

Charles Marcus | 1 Jul 12:48 2012

Re: RAID1+md concat+XFS as mailstorage

On 2012-07-01 3:17 AM, Stan Hoeppner <stan <at> hardwarefreak.com> wrote:
> In a production environment, the mirror pairs will be duplexed across
> two SAS/SATA controllers.
>
> Duplexing the mirrors makes a concat/RAID1, and a properly configured
> RAID10, inherently more reliable than RAID5 or RAID6, which simply can't
> be protected against controller failure.

Stan, am I correct that this - dual/redundant controllers - is the 
reason that a real SAN is more reliable than just running local storage 
on a mid- to high-end server?

-- 

Best regards,

Charles

Michael Brian Bentley | 1 Jul 22:04 2012

Config off by a nuance or a gross?

Hi,

I am trying to establish an IMAP mail service accessible using a current 
Thunderbird on a laptop.

I used Macports to install Dovecot2 (2.1.5) on an older Snow Leopard Mac 
mini with Intel Core Duo (specifically a Macmini1,1). My goal is to be 
able to run sieve under dovecot2 and offload the email triage from my 
work laptop to the little mail server.

Mail appears to show up on the server just fine. I'm having trouble 
getting Tbird to log in and access properly.

Because there are so many components in play (Tbird, PAM, Dovecot2, 
MacPorts, OS X Snow Leopard 10.6.8), and Dovecot by itself seems to have 
quite a few settings, it is hard to tell which bit is out of whack.

The message I get from Tbird 13.0.1 for OS X (on Lion) is:

Alert: The IMAP server bentley on TheMini does not support the selected 
authentication method. Please change the 'Authentication method' in the 
'Account Settings | Server settings'.
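
From what I've gathered so far, that alert usually means the server isn't 
advertising the authentication mechanism Tbird selected. Here is a sketch 
of the settings I plan to compare against my own doveconf -n output 
(values are guesses for testing, not a known-good config):

---
# dovecot.conf (2.1.x): mechanisms offered to clients
auth_mechanisms = plain login
# with the default disable_plaintext_auth = yes, PLAIN/LOGIN are only
# offered over SSL/TLS; either enable SSL or relax this while testing
disable_plaintext_auth = no
ssl = yes
# authenticate against the OS accounts via PAM
passdb {
  driver = pam
}
---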

When I set up the account, Tbird's automatic configuration invents a 
fake mail service based on my domain name, but appears to configure 
things sensibly once I switch to the manual configuration and fill in 
the relevant IP information. It does not appear to care whether it has a 
password or not.

When I set up the account manually on TBird, I let it set up as:
(Continue reading)

Stan Hoeppner | 2 Jul 01:12 2012

Re: RAID1+md concat+XFS as mailstorage

On 7/1/2012 5:48 AM, Charles Marcus wrote:
> On 2012-07-01 3:17 AM, Stan Hoeppner <stan <at> hardwarefreak.com> wrote:
>> In a production environment, the mirror pairs will be duplexed across
>> two SAS/SATA controllers.
>>
>> Duplexing the mirrors makes a concat/RAID1, and a properly configured
>> RAID10, inherently more reliable than RAID5 or RAID6, which simply can't
>> be protected against controller failure.
> 
> Stan, am I correct that this - dual/redundant controllers - is the
> reason that a real SAN is more reliable than just running local storage
> on a mod-high end server?

In this case I was simply referring to using two PCIe SAS HBAs in a
server, mirroring drive pairs across the HBAs with md, then
concatenating the RAID1 pairs with md --linear.  This gives protection
against all failure modes.  You can achieve the former with RAID5/6 but
not the latter.  Consider something like:

2x http://www.lsi.com/products/storagecomponents/Pages/LSISAS9200-8e.aspx
2x http://www.dataonstorage.com/dataon-products/6g-sas-jbod/dns-1640-2u-24-bay-6g-25inch-sassata-jbod.html
48x Seagate ST9300605SS 300GB SAS 10k RPM

This hardware yields a high IOPS, high concurrency, high performance
mail store.  Drives are mirrored across HBAs and JBODs.  Each HBA is
connected to an expander/controller in both chassis, yielding full path
redundancy.  Each controller can see every disk in both enclosures.
With this setup and SCSI multipath, you have redundancy against drive,
HBA, cable, expander, and chassis failure.  You can't get any more
(Continue reading)
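
A sketch of that layout in mdadm terms - device names here are
hypothetical multipath aliases (one member of each pair per HBA/JBOD),
and the agcount choice is an assumption in line with the
RAID1+concat+XFS approach in the subject line:

---
# build 24 RAID1 pairs, each mirrored across the two HBAs/JBODs
for i in $(seq 1 24); do
    mdadm --create /dev/md$i --level=1 --raid-devices=2 \
        /dev/mapper/jbod1-disk$i /dev/mapper/jbod2-disk$i
done
# concatenate the 24 mirrors with md --linear
mdadm --create /dev/md100 --level=linear --raid-devices=24 /dev/md{1..24}
# XFS with one allocation group per mirror pair spreads mailboxes
# (and thus IO) across all spindles
mkfs.xfs -d agcount=24 /dev/md100
---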

Angel L. Mateo | 2 Jul 08:49 2012

Re: lmtp proxy timeout while waiting for reply to DATA reply

On 29/06/12 22:33, Daniel Parthey wrote:
> Timo Sirainen wrote:
>> On Sat, 2012-04-28 at 13:00 +0200, Daniel Parthey wrote:
>>
>>> we are experiencing similar sporadic data timeout issues with dovecot 2.0.20
>>> as in http://dovecot.org/pipermail/dovecot/2011-June/059807.html
>>> at least once a week. Some mails get temporarily deferred in the
>>> postfix queue since dovecot director lmtp refuses them and the
>>> mails are delivered at a later time.
>>
>> What isn't in v2.0 is the larger rewrite of the LMTP proxying
>> code in v2.1, which I hope fixes also this timeout problem.
>
> Same problem persists after update to 2.1.7, especially for distribution
> lists which contain several target email addresses which are then
> pipelined by postfix through a single lmtp proxy connection:
>
> Jun 29 10:14:03 10.129.3.233 postfix/lmtp[29674]: 00318C090: to=<user01 <at> example.org>,
> orig_to=<email01 <at> example.org>, relay=127.0.0.1[127.0.0.1]:20024, delay=31,
> delays=1/0.16/0.01/30, dsn=4.4.0, status=deferred (host 127.0.0.1[127.0.0.1] said: 451 4.4.0
> Remote server not answering (timeout while waiting for reply to DATA reply) (in reply to end of DATA command))
> Jun 29 10:14:03 10.129.3.233 postfix/lmtp[29674]: 00318C090: to=<user02 <at> example.org>,
> orig_to=<email02 <at> example.org>, relay=127.0.0.1[127.0.0.1]:20024, delay=31,
> delays=1/0.16/0.01/30, dsn=4.4.0, status=deferred (host 127.0.0.1[127.0.0.1] said: 451 4.4.0
> Remote server not answering (timeout while waiting for reply to DATA reply) (in reply to end of DATA command))
> Jun 29 10:14:03 10.129.3.233 postfix/lmtp[29674]: 00318C090: to=<user03 <at> example.org>,
> orig_to=<email03 <at> example.org>, relay=127.0.0.1[127.0.0.1]:20024, delay=31,
> delays=1/0.16/0.01/30, dsn=4.4.0, status=deferred (host 127.0.0.1[127.0.0.1] said: 451 4.4.0
> Remote server not answering (timeout while waiting for reply to DATA reply) (in reply to end of DATA command))
> Jun 29 10:14:03 10.129.3.233 postfix/lmtp[29674]: 00318C090: to=<user04 <at> example.org>,
(Continue reading)

Angel L. Mateo | 2 Jul 08:53 2012

Re: director directing to wrong server (sometimes)

On 30/06/12 03:51, Daniel Parthey wrote:
> Hi Angel,
>
> Angel L. Mateo wrote:
>> I have a user, its assigned server is 155.54.211.164. The problem
>> is that I don't know why director sent him yesterday to a different
>> server, because my server was up all the time. Moreover, I'm using
>> poolmon in director servers to check availability of final servers
>> and it didn't report any problem with the server.
>
> Which version of dovecot are you using?
> "doveconf -n" of director and mailbox instance?
>
	Sorry. Here you have them

> You should monitor the output of
>    doveadm director status username <at> example.org
>    doveadm director ring status
> on each of the directors over time with a timestamp.
>
> This might shed some light on where the user is directed and why,
> and ring status will tell which directors can see each other.
> doveadm director move can also influence where a user is sent,
> but this will be reflected by "Current:" entry of director status,
> there you can also find the time when the entry in hashtable
> will expire.
>
	I have poolmon running. It didn't report any problems in its logs. I 
have also checked all the Dovecot logs and I don't see any errors.
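
	In case it helps others, the monitoring loop suggested above boils 
down to something like this (the username is just an example):

---
# log director placement and ring health once a minute, with timestamps
while true; do
    date
    doveadm director status user@example.org
    doveadm director ring status
    sleep 60
done
---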

(Continue reading)

Timo Sirainen | 2 Jul 09:10 2012

Re: lmtp proxy timeout while waiting for reply to DATA reply

On 2.7.2012, at 9.49, Angel L. Mateo wrote:

> 	My problem was that this timeout seems to be counted from the beginning
> of the LMTP connection, so when I have a lot of recipients in the same
> connection, the last ones sometimes timed out. I solved it by increasing
> this timeout with the proxy_timeout option and reducing the max number
> of LMTP recipients in postfix.

Ah, interesting. These should help:

http://hg.dovecot.org/dovecot-2.1/rev/27dccff46fe9
http://hg.dovecot.org/dovecot-2.1/rev/8a97daa8aff6
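
Until then, the workaround Angel describes maps to two knobs; the values
below are only examples, not recommendations:

---
# (1) dovecot director/passdb: return a larger proxy timeout as an
#     extra field (units/placement depend on version; check the docs)
proxy_timeout=120

# (2) postfix main.cf: pipeline fewer recipients per LMTP transaction,
#     so the last ones still fit inside the proxy timeout
lmtp_destination_recipient_limit = 10
---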

Timo Sirainen | 2 Jul 09:13 2012

Re: lmtp proxy timeout while waiting for reply to DATA reply

On 2.7.2012, at 10.10, Timo Sirainen wrote:

> On 2.7.2012, at 9.49, Angel L. Mateo wrote:
> 
>> 	My problem was that this timeout seems to be counted from the beginning
>> of the LMTP connection, so when I have a lot of recipients in the same
>> connection, the last ones sometimes timed out. I solved it by increasing
>> this timeout with the proxy_timeout option and reducing the max number
>> of LMTP recipients in postfix.
> 
> Ah, interesting. These should help:
> 
> http://hg.dovecot.org/dovecot-2.1/rev/27dccff46fe9
> http://hg.dovecot.org/dovecot-2.1/rev/8a97daa8aff6

Plus http://hg.dovecot.org/dovecot-2.1/rev/569588ff7ef0 although I'm not entirely sure if it's
needed. The LMTP code is rather ugly and difficult to follow..
