Dave Cramer | 30 Jun 22:45 2015

how to remove an event

I managed to create a path with the wrong user, then executed a SUBSCRIBE SET on that path.

I tried UNSUBSCRIBE SET, but the event is still in the logs?
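
For reference, a minimal sketch of how the lingering event can be inspected, assuming a placeholder cluster schema of "_mycluster" and the sl_event columns used elsewhere on this list:

    -- List the most recent events recorded in sl_event on the origin node,
    -- to see whether the unwanted subscribe event is still present.
    -- "_mycluster" is a placeholder for the actual cluster schema name.
    SELECT ev_origin, ev_seqno, ev_timestamp, ev_type
      FROM "_mycluster".sl_event
     ORDER BY ev_origin, ev_seqno DESC
     LIMIT 20;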


Dave Cramer
Dave Cramer | 30 Jun 19:45 2015

using slony to go from 8.4 to 9.4

I'm trying to replicate from PostgreSQL 8.4 to 9.4.

I'm getting the following error during STORE NODE:

could not access file "$libdir/slony1_funcs.2.2.2"

This occurs when trying to install the Slony functions into the 9.4 node.
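
One quick check (just a sketch, not a definitive fix): ask the 9.4 backend to load the library named in the error. If the Slony 2.2.2 binaries are not installed for that PostgreSQL version, the same error should come back, and pg_settings shows where the server resolves libraries from.

    -- Diagnostic sketch on the 9.4 node: try loading the Slony shared library
    -- named in the error message above.
    LOAD '$libdir/slony1_funcs.2.2.2';

    -- Where the server looks for shared libraries:
    SELECT name, setting FROM pg_settings WHERE name = 'dynamic_library_path';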

Dave Cramer
Mark Steben | 5 Jun 17:09 2015

bloat maintenance on slony internal tables

Good morning,

I track table bloat on all DB tables on both the master and the slave. Some of the Slony tables, notably sl_apply_stats and sl_components (on the slave), can grow to one to two thousand times their original size due to bloat. This condition has been prevalent for some time now. Is there a utility I can run to clean this bloat out, or do I need to resort to VACUUM FULL?
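
For what it's worth, a minimal sketch of the VACUUM FULL fallback mentioned above, with "_mycluster" standing in for the real cluster schema name (note that VACUUM FULL takes an exclusive lock while it rewrites each table):

    -- Check the current on-disk size of the two tables in question.
    SELECT n.nspname, c.relname,
           pg_size_pretty(pg_total_relation_size(c.oid)) AS total_size
      FROM pg_class c
      JOIN pg_namespace n ON n.oid = c.relnamespace
     WHERE c.relname IN ('sl_apply_stats', 'sl_components');

    -- Rewrite them to reclaim the bloat ("_mycluster" is a placeholder).
    VACUUM FULL ANALYZE "_mycluster".sl_apply_stats;
    VACUUM FULL ANALYZE "_mycluster".sl_components;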

Thank you,

--
Mark Steben
 Database Administrator
@utoRevenue | Autobase
  CRM division of Dominion Dealer Solutions 
95D Ashley Ave.
West Springfield, MA 01089

t: 413.327-3045
f: 413.383-9567

www.fb.com/DominionDealerSolutions
www.twitter.com/DominionDealer
 www.drivedominion.com






Slony error when insert data

Hello,

I am using Slony1-2.2.4 with PostgreSQL 9.4.1. I set up the replication system following the documentation, and I have a replicated
table named 'test'. When I start replication, everything seems fine. But if I insert some data into test on the master
node, the slon SYNC fails:

ERROR  remoteListenThread_2: "select ev_origin, ev_seqno, ev_timestamp, ev_snapshot,
    "pg_catalog".txid_snapshot_xmin(ev_snapshot), "pg_catalog".txid_snapshot_xmax(ev_snapshot),
    ev_type, ev_data1, ev_data2, ev_data3, ev_data4, ev_data5, ev_data6, ev_data7, ev_data8
    from "_slony_example".sl_event e
    where (e.ev_origin = '2' and e.ev_seqno > '5000000004')
    order by e.ev_origin, e.ev_seqno limit 40" - server closed the connection unexpectedly
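
One way to narrow this down (a sketch, reusing the cluster schema "_slony_example" from the error above): run the listener's query by hand in psql against node 2. If the backend dies again, the PostgreSQL server log on that node should say why the connection was dropped.

    -- Re-run the query that remoteListenThread_2 was executing when the
    -- connection closed (copied from the error above).
    SELECT ev_origin, ev_seqno, ev_timestamp, ev_snapshot,
           pg_catalog.txid_snapshot_xmin(ev_snapshot),
           pg_catalog.txid_snapshot_xmax(ev_snapshot),
           ev_type, ev_data1, ev_data2, ev_data3, ev_data4,
           ev_data5, ev_data6, ev_data7, ev_data8
      FROM "_slony_example".sl_event e
     WHERE e.ev_origin = '2' AND e.ev_seqno > '5000000004'
     ORDER BY e.ev_origin, e.ev_seqno
     LIMIT 40;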

-- 
Best regards, Dmitry Voronin
Soni M | 17 Apr 05:44 2015

long 'idle in transaction' from remote slon

Hello All,
Two nodes are configured for Slony 2.0.7 on RHEL 6.5, using PostgreSQL 9.1.14. Each slon manages its local postgres.
Slony on RHEL was installed from the postgres yum repo.

On some occasions, on the master DB, the cleanupEvent lasts a long time, up to 5 minutes, where it normally finishes in a few seconds. The 'truncate sl_log_x' statement is waiting for a lock, and that wait takes most of the time. This makes all write operations to postgres wait as well, and some of them fail. From what I can see, what makes the truncate wait is another transaction from the slave's slon process, namely the transaction that runs 'fetch 500 from LOG'. On some occasions that transaction is left 'idle in transaction' for a long time.

Why does this happen? Is it due to network latency between the nodes? Is there any workaround for this?
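
A diagnostic sketch for the master, written against the pre-9.2 pg_stat_activity layout that 9.1 uses (procpid / current_query): list the sessions sitting idle in a transaction, oldest first. The slave slon's cursor transaction described above should show up here.

    -- On PostgreSQL 9.1, idle-in-transaction backends report the marker text
    -- '<IDLE> in transaction' in current_query.
    SELECT procpid, usename, client_addr, xact_start,
           now() - xact_start AS xact_age
      FROM pg_stat_activity
     WHERE current_query = '<IDLE> in transaction'
     ORDER BY xact_start;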

Many thanks, cheers...

--
Regards,

Soni Maula Harriz
David Fetter | 15 Apr 00:56 2015

Multiple slons per node pair?

Folks,

This came up in the context of making slony k-safe for some k>0.

Naively, a simple way to do this would be to have >1 machine, each
running all the slons for a cluster, replacing any machines that fail.

Would Bad Things™ happen as a consequence?

Cheers,
David.
-- 
David Fetter <david@fetter.org> http://fetter.org/
Phone: +1 415 235 3778  AIM: dfetter666  Yahoo!: dfetter
Skype: davidfetter      XMPP: david.fetter@gmail.com

Remember to vote!
Consider donating to Postgres: http://www.postgresql.org/about/donate
Dave Cramer | 18 Mar 14:50 2015

replicating execute script

Due to the use of capital letters in the slony cluster, EXECUTE SCRIPT fails.

I am looking to replicate EXECUTE SCRIPT for DDL changes. From what I can see, EXECUTE SCRIPT takes out a lock on sl_lock before executing the script and releases it at the end.

What else am I missing?
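
As a small illustration of why capital letters bite here (hypothetical names, not taken from this cluster): PostgreSQL folds unquoted identifiers to lower case, so a mixed-case cluster schema only resolves when it is double-quoted exactly as created.

    -- Hypothetical cluster named "MyCluster", so its schema is "_MyCluster".
    SELECT * FROM "_MyCluster".sl_event LIMIT 1;  -- works: quoted, case preserved
    SELECT * FROM _MyCluster.sl_event LIMIT 1;    -- fails: folded to "_mycluster"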

Dave Cramer
Clement Thomas | 23 Feb 16:51 2015

sl_log_1 and sl_log_2 tables not cleaned up

Hi All,
we face a weird problem in our 3-node slony setup.

* node1 (db1.domain.tld) is the master provider, and node2 (db2.domain.tld) and node3 (db3.domain.tld) are subscribers. The nodes currently have 5 replication sets and replication is working fine.
* The problem is that the sl_log_1 and sl_log_2 tables on node1 get cleaned up properly, but the tables on node2 and node3 do not. On node1 the total number of rows in sl_log_1 is 24845 and in sl_log_2 it is 0, whereas:

node2:

                         relation                         |  size
----------------------------------------------------------+---------
 _mhb_replication.sl_log_2                                | 130 GB
 _mhb_replication.sl_log_2_idx1                           | 47 GB
 _mhb_replication.PartInd_mhb_replication_sl_log_2-node-1 | 30 GB

node3:
                         relation                         |  size
----------------------------------------------------------+--------
 _mhb_replication.sl_log_2                                | 133 GB
 _mhb_replication.sl_log_2_idx1                           | 47 GB
 _mhb_replication.PartInd_mhb_replication_sl_log_2-node-1 | 30 GB
 _mhb_replication.sl_log_1                                | 352 MB

On node2 and node3 we frequently see the following lines:

slon[20695]: [4031-1] FATAL  cleanupThread: "delete from "_mhb_replication".sl_log_1 where log_origin = '1' and log_xid < '2130551154'; delete from
slon[20695]: [4031-2]  "_mhb_replication".sl_log_2 where log_origin = '1' and log_xid < '2130551154'; delete from "_mhb_replication".sl_seqlog where
slon[20695]: [4031-3]  seql_origin = '1' and seql_ev_seqno < '51449379'; select "_mhb_replication".logswitch_finish(); " - ERROR: canceling statement
slon[20695]: [4031-4]  due to statement timeout
slon[20695]: [4032-1] DEBUG2 slon_retry() from pid=20695

Please find the slony_tools.conf here: https://gist.github.com/clement1289/d928acb771ca01a89281 and the sl_status / sl_listen output here: https://gist.github.com/clement1289/88df40f77c03c691eee5. Hoping for some help.
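
One way to confirm the statement-timeout diagnosis (just a sketch, reusing the exact statements and cutoff values from the log excerpt above, which were only current at that moment): run the cleanup by hand on node2/node3 in a session with the timeout lifted and see how long it really takes.

    -- Lift the timeout for this session only, then run the same statements
    -- the cleanupThread was cancelled on (values copied from the log above).
    SET statement_timeout = 0;

    DELETE FROM "_mhb_replication".sl_log_1
     WHERE log_origin = '1' AND log_xid < '2130551154';
    DELETE FROM "_mhb_replication".sl_log_2
     WHERE log_origin = '1' AND log_xid < '2130551154';
    DELETE FROM "_mhb_replication".sl_seqlog
     WHERE seql_origin = '1' AND seql_ev_seqno < '51449379';
    SELECT "_mhb_replication".logswitch_finish();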

Regards,
Clement
Mark Steben | 19 Feb 17:59 2015

Wish to run altperl scripts on master rather than slave

Good morning,

We are running the following on both master and slave (a simple 1-master to 1-slave configuration):
    postgresql 9.2.5
    slony1-2.2.2
     x86_64 GNU/Linux

We currently run altperl scripts to kill / start slon daemons from the slave:
   cd ...bin folder
   ./slon_kill -c .../slon_tools.....conf
         and
   ./slon_start -c ../slon_tools...conf 1 (and 2)

Because we need to run maintenance on the replicated DB on the master without slony running, I would like to run these commands on the master before and after the maintenance. Since the daemons now run on the slave, they aren't found when I attempt to run these commands on the master. Is there a prescribed way to accomplish this? I could continue to run them on the slave and send a flag to the master when complete, but I'd like to take a simpler approach if possible.
Any insight appreciated. Thank you.



--
Mark Steben
 Database Administrator
@utoRevenue | Autobase
  CRM division of Dominion Dealer Solutions 
95D Ashley Ave.
West Springfield, MA 01089

t: 413.327-3045
f: 413.383-9567

www.fb.com/DominionDealerSolutions
www.twitter.com/DominionDealer
 www.drivedominion.com





Mark Steben | 19 Feb 17:38 2015

Fwd: The results of your email commands

Greetings, I put in a question to slony1-general-request and got this back almost immediately.
Does this mean I'm in the queue, or have I been bounced out?
Thx, Mark

---------- Forwarded message ----------
From: <slony1-general-bounces-8kkgcvHRObyz5F2/bZa4Fw@public.gmane.org>
Date: Thu, Feb 19, 2015 at 9:12 AM
Subject: The results of your email commands
To: mark.steben-UjXgi8GuFLL8esGaZs7s5AC/G2K4zDHf@public.gmane.org


The results of your email command are provided below. Attached is your
original message.

- Results:
    Ignoring non-text/plain MIME parts

- Unprocessed:
    We are running the following on both master and slave: (a simple 1 master
    to 1 slave configuration)
        postgresql 9.2.5
        slony1-2.2.2
         x86_64 GNU/Linux
    We currently run altperl scripts to kill / start slon daemons from the
    slave:
       cd ...bin folder
       ./slon_kill -c .../slon_tools.....conf
             and
       ./slon_start -c ../slon_tools...conf 1 (and 2)
     Because we need to run maintenance on the replicated db on the master
    without slony running I would like to run these commands on the master
    before and after the maintenance.  Since the daemons now run on the slave
    when I attempt to run these commands on the master the daemons aren't
    found.  Is
    there a prescribed way to accomplish this?  I could continue to run them on
    the
    slave and send a flag to the master when complete but I'd like to take a
    simpler approach if possible.
     Any insight appreciated.  Thank you.

- Ignored:


    --
    *Mark Steben*
     Database Administrator
    @utoRevenue <http://www.autorevenue.com/> | Autobase
    <http://www.autobase.net/>
      CRM division of Dominion Dealer Solutions
    95D Ashley Ave.
    West Springfield, MA 01089
    t: 413.327-3045
    f: 413.383-9567

    www.fb.com/DominionDealerSolutions
    www.twitter.com/DominionDealer
     www.drivedominion.com <http://www.autorevenue.com/>

    <http://autobasedigital.net/marketing/DD12_sig.jpg>

- Done.



---------- Forwarded message ----------
From: Mark Steben <mark.steben@drivedominion.com>
To: slony1-general-request-8kkgcvHRObyz5F2/bZa4Fw@public.gmane.org
Cc: 
Date: Thu, 19 Feb 2015 09:15:06 -0500
Subject: slon_kill, slon_start to run on master
Good morning,

We are running the following on both master and slave: (a simple 1 master to 1 slave configuration)
    postgresql 9.2.5
    slony1-2.2.2
     x86_64 GNU/Linux

We currently run altperl scripts to kill / start slon daemons from the slave:
   cd ...bin folder
   ./slon_kill -c .../slon_tools.....conf
         and
   ./slon_start -c ../slon_tools...conf 1 (and 2)

 Because we need to run maintenance on the replicated db on the master without slony running I would like to run these commands on the master before and after the maintenance.  Since the daemons now run on the slave when I attempt to run these commands on the master the daemons aren't found.  Is
there a prescribed way to accomplish this?  I could continue to run them on the
slave and send a flag to the master when complete but I'd like to take a simpler approach if possible. 
 Any insight appreciated.  Thank you.


--
Mark Steben
 Database Administrator
@utoRevenue | Autobase
  CRM division of Dominion Dealer Solutions 
95D Ashley Ave.
West Springfield, MA 01089

t: 413.327-3045
f: 413.383-9567

www.fb.com/DominionDealerSolutions
www.twitter.com/DominionDealer
 www.drivedominion.com









--
Mark Steben
 Database Administrator
@utoRevenue | Autobase
  CRM division of Dominion Dealer Solutions 
95D Ashley Ave.
West Springfield, MA 01089

t: 413.327-3045
f: 413.383-9567

www.fb.com/DominionDealerSolutions
www.twitter.com/DominionDealer
 www.drivedominion.com





Tory M Blue | 5 Feb 19:19 2015

sl_log_1 not truncated, could not lock

2015-02-05 09:53:29 PST clsdb postgres 10.13.200.232(54830) 51877 2015-02-05 09:53:29.976 PSTNOTICE:  Slony-I: log switch to sl_log_2 complete - truncate sl_log_1

2015-02-05 10:01:34 PST clsdb postgres 10.13.200.231(46083) 42459 2015-02-05 10:01:34.481 PSTNOTICE:  Slony-I: could not lock sl_log_1 - sl_log_1 not truncated

So I have 13 million rows in sl_log_1, and from my checks of various tables everything is replicated, but this table still has a lock and is not being truncated. These errors have been happening since 12:08 AM.

My sl_log_2 table now has 8 million rows, but I'm replicating and not adding a bunch of data. We did some massive deletes last night; 20 million rows was the last batch, around when things seemed to stop switching and truncating.

So, questions: how can I verify that sl_log_1 can be truncated (that everything in it has been replicated), and how can I figure out what is holding the lock so that slony can't truncate?

I'm not hurting, just stressing at this point.
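
For the locking half of the question, a sketch that stays version-agnostic: pg_locks shows which backends hold, or are waiting on, locks on sl_log_1, and those pids can then be looked up in pg_stat_activity to see what they are running.

    -- Who currently holds (or waits for) a lock on sl_log_1?
    SELECT l.pid, l.mode, l.granted
      FROM pg_locks l
      JOIN pg_class c ON c.oid = l.relation
     WHERE c.relname = 'sl_log_1';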

Thanks
tory