Steve Singer | 27 Jul 04:03 2015

Slonik API

During the slony session at the PGCon unconference, some people asked 
about an API so they could manipulate slony clusters without having to 
call slonik.

I've made a draft attempt at moving the slonik functionality into a library.

You can view my draft API here
https://github.com/ssinger/slony1-engine/blob/libslonik/src/slonik/slonik_api.h

A sample program that uses it might look like this:

        /* requires slonik_api.h and <stdlib.h> for malloc() */
        struct SlonikContext * context;

        /* conninfo entry for node 1 */
        SlonikApi_NodeConnInfo n1;
        n1.no_id = 1;
        n1.conninfo = "host=localhost dbname=test1 port=5435";

        /* NULL-terminated list of node conninfo pointers */
        SlonikApi_NodeConnInfo **n_list =
            malloc(sizeof(SlonikApi_NodeConnInfo *) * 2);
        n_list[0] = &n1;
        n_list[1] = NULL;

        /* initialize a context for cluster "disorder_replica", then
         * call the function for each slonik command you want to run */
        context = slonik_api_init_context("disorder_replica", n_list);
        slonik_api_sync(context, 1);
        slonik_api_subscribe_set(context, 1, 1, 2, 0, 0);

The idea is that you would set up a structure with the conninfo 
configuration and initialize your slony context.  You would then pass 
this context to the function for each slonik command you want to run.

I'd like to get a sense of how people feel about this API.


David Fetter | 25 Jul 06:31 2015

Cloning an origin?

Folks,

While in the best of all possible worlds we'd have planned out a
replication strategy before getting tables whose initial sync via
"SUBSCRIBE SET" will never finish, we aren't always given that much
ability to plan that soon.

CLONE is great when you want to light an Nth node for N > 2, but
that's just adjusting an extant cluster, not creating one in the first
place.

What stands between the state of the slony code and being able to
clone an origin node?

Cheers,
David.
-- 
David Fetter <david@...> http://fetter.org/
Phone: +1 415 235 3778  AIM: dfetter666  Yahoo!: dfetter
Skype: davidfetter      XMPP: david.fetter@...

Remember to vote!
Consider donating to Postgres: http://www.postgresql.org/about/donate
Dave Cramer | 30 Jun 22:45 2015

how to remove an event

I managed to create a path with the wrong user, and then executed a subscribe set on that path.

I tried unsubscribe set, but the event is still in the logs?
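
For reference, the events slon still knows about can be inspected directly in sl_event; a minimal sketch, assuming the cluster schema is "_mycluster" (the columns are the same ones slon itself queries):

    -- list the events still recorded for the cluster
    -- ("_mycluster" is a placeholder for your cluster schema)
    SELECT ev_origin, ev_seqno, ev_timestamp, ev_type, ev_data1
      FROM "_mycluster".sl_event
     ORDER BY ev_origin, ev_seqno;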


Dave Cramer
Dave Cramer | 30 Jun 19:45 2015

using slony to go from 8.4 to 9.4

I'm trying to replicate from 8.4 to 9.4

Getting the following error on the store node:

could not access file "$libdir/slony1_funcs.2.2.2"

This is occurring when trying to install the functions into the 9.4 node.

Dave Cramer
Mark Steben | 5 Jun 17:09 2015

bloat maintenance on slony internal tables

Good morning,

I track table bloat on all db tables on the master and slave. Some of the slony tables, notably sl_apply_stats and sl_components (on the slave), can grow to 1-2 thousand times their original size due to bloat.  This condition has been prevalent for some time now.  Is there a utility
I can run to clean this bloat out, or do I need to resort to VACUUM FULL?
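
For what it's worth, a minimal sketch of checking and then reclaiming the space, assuming the cluster schema is "_mycluster" (VACUUM FULL rewrites the table and holds an exclusive lock while it runs):

    -- sizes and dead-tuple counts for the slony schema
    -- ("_mycluster" is a placeholder for your cluster schema)
    SELECT relname,
           pg_size_pretty(pg_total_relation_size(relid)) AS total_size,
           n_dead_tup
      FROM pg_stat_user_tables
     WHERE schemaname = '_mycluster'
     ORDER BY pg_total_relation_size(relid) DESC;

    -- rewrite the bloated tables (exclusive lock, so schedule accordingly)
    VACUUM FULL "_mycluster".sl_apply_stats;
    VACUUM FULL "_mycluster".sl_components;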

Thank you,

--
Mark Steben
 Database Administrator
<at> utoRevenue | Autobase 
  CRM division of Dominion Dealer Solutions 
95D Ashley Ave.
West Springfield, MA 01089

t: 413.327-3045
f: 413.383-9567

www.fb.com/DominionDealerSolutions
www.twitter.com/DominionDealer
 www.drivedominion.com





Dmitry Voronin

Slony error when insert data

Hello,

I am using Slony1-2.2.4 and PostgreSQL 9.4.1. I set up the replication system following the documentation. I have a replicated
table named 'test'. When I start replication, everything seems fine. But if I insert some data into test at the master
node, the slon SYNC fails:

ERROR  remoteListenThread_2: "select ev_origin, ev_seqno, ev_timestamp, ev_snapshot,
    "pg_catalog".txid_snapshot_xmin(ev_snapshot),
    "pg_catalog".txid_snapshot_xmax(ev_snapshot),
    ev_type, ev_data1, ev_data2, ev_data3, ev_data4,
    ev_data5, ev_data6, ev_data7, ev_data8
    from "_slony_example".sl_event e
    where (e.ev_origin = '2' and e.ev_seqno > '5000000004')
    order by e.ev_origin, e.ev_seqno limit 40"
  - server closed the connection unexpectedly

-- 
Best regards, Dmitry Voronin
Soni M | 17 Apr 05:44 2015

long 'idle in transaction' from remote slon

Hello All,
2 nodes are configured for slony 2.0.7, on RHEL 6.5, using postgres 9.1.14. Each slon manages its local postgres.
Slony and RHEL were installed from the postgres yum repo.

On some occasions on the master db, the cleanupEvent lasts a long time, up to 5 minutes; normally it finishes in a few seconds. The 'truncate sl_log_x' is waiting for a lock, and that wait takes most of the time. This makes all write operations to postgres wait as well, and some of them fail. As far as I can tell, what makes the truncate wait is another slon transaction made by the slave's slon process: the transaction which runs 'fetch 500 from LOG'. That transaction is left 'idle in transaction' for a long time on some occasions.

Why does this happen?  Is it due to network latency between the nodes?  Is there any workaround for it?
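
For reference, this is roughly how the offending session shows up on the master; a minimal sketch against pg_stat_activity as it looks on 9.1 (9.2 and later renamed the columns to pid/state/query):

    -- long-lived 'idle in transaction' backends (PostgreSQL 9.1 column names)
    SELECT procpid, usename, client_addr,
           now() - xact_start AS xact_age
      FROM pg_stat_activity
     WHERE current_query = '<IDLE> in transaction'
     ORDER BY xact_start;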

Many thanks, cheers...

--
Regards,

Soni Maula Harriz
David Fetter | 15 Apr 00:56 2015

Multiple slons per node pair?

Folks,

This came up in the context of making slony k-safe for some k>0.

Naively, a simple way to do this would be to have >1 machine, each
running all the slons for a cluster, replacing any machines that fail.

Would Bad Things™ happen as a consequence?

Cheers,
David.
-- 
David Fetter <david <at> fetter.org> http://fetter.org/
Phone: +1 415 235 3778  AIM: dfetter666  Yahoo!: dfetter
Skype: davidfetter      XMPP: david.fetter <at> gmail.com

Remember to vote!
Consider donating to Postgres: http://www.postgresql.org/about/donate
Dave Cramer | 18 Mar 14:50 2015

replicating execute script

Due to the usage of capital letters in the slony cluster, execute script fails.

I am looking to replicate execute script for DDL changes. From what I can see, execute script takes out a lock on sl_lock before executing the script, and releases it at the end.

What else am I missing?

Dave Cramer
Clement Thomas | 23 Feb 16:51 2015

sl_log_1 and sl_log_2 tables not cleaned up

Hi All,
          we are facing a weird problem in our 3-node slony setup.

* node1 (db1.domain.tld) is the master provider, and node2
(db2.domain.tld) and node3 (db3.domain.tld) are subscribers.
Currently the nodes have 5 replication sets and replication is working
fine.
* The problem is that the sl_log_1 and sl_log_2 tables on node1 get
cleaned up properly, but the tables on node2 and node3 don't.  On node1
the total number of rows in sl_log_1 is 24845 and in sl_log_2 it is 0,
whereas:

node2:

                         relation                         |  size
----------------------------------------------------------+---------
 _mhb_replication.sl_log_2                                | 130 GB
 _mhb_replication.sl_log_2_idx1                           | 47 GB
 _mhb_replication.PartInd_mhb_replication_sl_log_2-node-1 | 30 GB

node3:
                         relation                         |  size
----------------------------------------------------------+--------
 _mhb_replication.sl_log_2                                | 133 GB
 _mhb_replication.sl_log_2_idx1                           | 47 GB
 _mhb_replication.PartInd_mhb_replication_sl_log_2-node-1 | 30 GB
 _mhb_replication.sl_log_1                                | 352 MB

On node2 and node3 we can see the following lines frequently:

slon[20695]: FATAL  cleanupThread: "delete from "_mhb_replication".sl_log_1
    where log_origin = '1' and log_xid < '2130551154';
    delete from "_mhb_replication".sl_log_2
    where log_origin = '1' and log_xid < '2130551154';
    delete from "_mhb_replication".sl_seqlog
    where seql_origin = '1' and seql_ev_seqno < '51449379';
    select "_mhb_replication".logswitch_finish(); "
    - ERROR:  canceling statement due to statement timeout
slon[20695]: DEBUG2 slon_retry() from pid=20695

Please find the slony_tools.conf here:
https://gist.github.com/clement1289/d928acb771ca01a89281
and the sl_status / sl_listen output here:
https://gist.github.com/clement1289/88df40f77c03c691eee5
Hoping for some help.
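
The FATAL above is the cleanup delete being cancelled by statement_timeout; a minimal sketch of exempting the slon connections from it (the role name "slony" and database name "mhb" are placeholders):

    -- let the cleanupThread deletes run to completion
    ALTER ROLE slony SET statement_timeout = 0;
    -- or, for every connection to the replicated database:
    ALTER DATABASE mhb SET statement_timeout = 0;
    -- (per-role/per-database settings apply to new sessions,
    --  i.e. after the slons reconnect)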

Regards,
Clement
Mark Steben | 19 Feb 17:59 2015

Wish to run altperl scripts on master rather than slave

Good morning,

We are running the following on both master and slave (a simple 1-master to 1-slave configuration):
    postgresql 9.2.5
    slony1-2.2.2
     x86_64 GNU/Linux

We currently run altperl scripts to kill / start slon daemons from the slave:
   cd ...bin folder
   ./slon_kill -c .../slon_tools.....conf
         and
   ./slon_start -c ../slon_tools...conf 1 (and 2)

Because we need to run maintenance on the replicated db on the master without slony running, I would like to run these commands on the master before and after the maintenance.  Since the daemons currently run on the slave, when I attempt to run these commands on the master the daemons aren't found.  Is there a prescribed way to accomplish this?  I could continue to run them on the slave and send a flag to the master when complete, but I'd like to take a simpler approach if possible.  Any insight appreciated.  Thank you.



--
Mark Steben
 Database Administrator
<at> utoRevenue | Autobase 
  CRM division of Dominion Dealer Solutions 
95D Ashley Ave.
West Springfield, MA 01089

t: 413.327-3045
f: 413.383-9567

www.fb.com/DominionDealerSolutions
www.twitter.com/DominionDealer
 www.drivedominion.com





