Dane Miller | 1 Nov 06:52 2007

trace the path of replicated data

Hi Slony folks,

I've been reading docs trying to understand how Slony works.  I've tried to outline below my understanding
of how an INSERT into the origin gets replicated to the subscriber.  Do I get the gist of it here?  If not, could
someone be so kind as to correct me?  I hope this isn't too much of a newbie question, but I didn't find an
answer in the admin guide on slony.info.

Tracing an insert from the origin to a subscriber (2 node cluster)

1. INSERT into the origin
2. AFTER ROW trigger executes logtrigger which inserts into sl_log_{1,2}
3. origin's slon checks sl_log_{1,2}, inserts event into sl_event, generates NOTIFY
4. subscriber receives NOTIFY, inserts event into its sl_event table, inserts confirmation into its
local sl_confirm table.
5. subscriber inserts event into origin's sl_confirm table
6. subscriber queries origin's sl_log_{1,2} using event info
7. Subscriber applies result of #6 to its local tables.
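
A rough sketch of queries one could run to watch steps 2 and 4-5 on a live
cluster, assuming the cluster is named "mycluster" (so its schema is
"_mycluster"); the column names below are from the 1.2 schema and may need
adjusting for other versions:

-- step 2: row changes captured by the log trigger on the origin
-- (check sl_log_2 instead if that is the currently active log table)
select log_origin, log_xid, log_tableid, log_cmdtype, log_cmddata
  from "_mycluster".sl_log_1
 order by log_actionseq desc
 limit 5;

-- steps 4-5: confirmations recorded as events propagate
select con_origin, con_received, con_seqno, con_timestamp
  from "_mycluster".sl_confirm
 order by con_timestamp desc
 limit 5;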

whew!  If this has been covered elsewhere, please point me in the right direction.  I got this far by reading
Slony-I-concept.pdf, but it seemed to cover more complex setups, and I got a bit lost in the detail.

Thanks,
Dane
stephen.hindmarch | 1 Nov 12:30 2007

Bugs in configure-replication.sh

I've been using configure-replication.sh to make my life easier when
preparing Slony installation scripts, and I've fixed a couple of bugs
along the way. I have been using version 1.2.6, but have just checked CVS
and the bugs are still in trunk.

Both bugs are due to missing newlines: one in the store_paths
function, the other in the code that generates "add sequence" lines.

I have created a patch file, which is included below.

===BEGIN===
--- slony1-1.2.6/tools/configure-replication.sh	2006-12-14 22:31:19.000000000 +0000
+++ slony1-patch/tools/configure-replication.sh	2007-11-01 10:57:42.000000000 +0000
@@ -55,7 +55,8 @@
           eval buser=\$USER${j}
           eval bport=\$PORT${j}
           if [ -n "${bdb}" -a "${bhost}" -a "${buser}" -a "${bport}" ]; then
-            echo "STORE PATH (SERVER=${i}, CLIENT=${j}, CONNINFO='dbname=${db} host=${host} user=${user} port=${port}');" >> $mktmp/store_paths.slonik          else
+            echo "STORE PATH (SERVER=${i}, CLIENT=${j}, CONNINFO='dbname=${db} host=${host} user=${user} port=${port}');" >> $mktmp/store_paths.slonik
+          else
             echo "STORE PATH (SERVER=${i}, CLIENT=${j}, CONNINFO='dbname=${db} host=${host} user=${user} port=${port}');" >> $mktmp/store_paths.slonik

Geoffrey | 1 Nov 13:48 2007

questions about our slony solution

I probably should have posted this a long while ago.  We are in the 
process of setting up a single slony slave for the sake of disaster 
recovery.  Our setup is relatively complex, and I thought I'd throw out 
some questions regarding it, in the hope that someone here on the list 
might point out issues that could stop us from completing this process 
successfully.  My apologies in advance for the length of this post.

Our current production setup has two boxes that share a data silo.  It's 
a hot-hot scenario where roughly half the postmasters run from one 
machine and the other half from the other.  In the event of a machine 
failure, the surviving box picks up all the databases.

We are running Red Hat Advanced Server 4 and PostgreSQL 7.4.18.

Our plan is to replicate all existing databases to another box; for the 
short term, it will reside in the same location as the master boxes. 
Long term, it will be moved to a data center in Nashville, TN; we are in 
Gainesville, GA.

We currently have 9 postmasters running on one machine, 8 on the other. 
This number will likely grow.

My plan is to 'turn on' slony for each database one at a time, in order 
to reduce the initial hit on the servers as slony performs the initial 
replication of the data in each database. The existing databases' 
directories range from 407M to 8.5G, most of which are in the 1.5G range.

The majority of the databases have identical schemas.  Will I be 
configuring separate instances of slony per database, or will a single 
instance handle multiple databases?

Christopher Browne | 1 Nov 14:47 2007

Re: trace the path of replicated data

Dane Miller <dane@...> writes:

> Hi Slony folks,
>
> I've been reading docs trying to understand how Slony works.  I've tried to outline below my understanding
> of how an INSERT into the origin gets replicated to the subscriber.  Do I get the gist of it here?  If not, could
> someone be so kind as to correct me?  I hope this isn't too much of a newbie question, but I didn't find an
> answer in the admin guide on slony.info.
>
> Tracing an insert from the origin to a subscriber (2 node cluster)
>
> 1. INSERT into the origin
> 2. AFTER ROW trigger executes logtrigger which inserts into sl_log_{1,2}
> 3. origin's slon checks sl_log_{1,2}, inserts event into sl_event, generates NOTIFY
> 4. subscriber receives NOTIFY, inserts event into its sl_event table, inserts confirmation into its
> local sl_confirm table.
> 5. subscriber inserts event into origin's sl_confirm table
> 6. subscriber queries origin's sl_log_{1,2} using event info
> 7. Subscriber applies result of #6 to its local tables.
>
> whew!  If this has been covered elsewhere, please point me in the right direction.  I got this far by reading
> Slony-I-concept.pdf, but it seemed to cover more complex setups, and I got a bit lost in the detail.

That's about right.

It's worth noting that the NOTIFY isn't run for each update to a
replicated table - it happens on more of a chronological basis, since the
slon generates SYNC events (and the accompanying NOTIFY) on a time
interval rather than once per row.
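
For instance, looking at the most recent SYNC events on the origin shows
them spaced out in time rather than one per INSERT.  A rough sketch,
assuming a cluster named "mycluster" (adjust the schema name to your
cluster):

select ev_origin, ev_seqno, ev_type, ev_timestamp
  from "_mycluster".sl_event
 where ev_type = 'SYNC'
 order by ev_seqno desc
 limit 5;
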
-- 
let name="cbbrowne" and tld="linuxdatabases.info" in String.concat " <at> " [name;tld];;
http://linuxdatabases.info/info/nonrdbms.html

Cyril SCETBON | 2 Nov 10:32 2007

Re: Move SET seems not to work properly


Cyril SCETBON wrote:

> Hi,
>
>
>
> I have the following configuration:
>
>
> select * from "_CLUSTER1".sl_subscribe;
> sub_set | sub_provider | sub_receiver | sub_forward | sub_active
> ---------+--------------+--------------+-------------+------------
>       1 |            1 |            2 | t           | t
>       1 |            1 |            3 | t           | t
>       1 |            3 |            4 | t           | t
>
>
> I used the following slonik script to make node 2 the new provider:
>
>
> CLUSTER NAME = CLUSTER1;
> NODE 1 ADMIN CONNINFO = 'dbname=dbrep host=host1 user=postgres port=5433';
> NODE 2 ADMIN CONNINFO = 'dbname=dbrep host=host2 user=postgres port=5434';
> NODE 3 ADMIN CONNINFO = 'dbname=dbrep host=host3 user=postgres port=5435';
> NODE 4 ADMIN CONNINFO = 'dbname=dbrep host=host4 user=postgres port=5436';

Christopher Browne | 2 Nov 16:38 2007

Re: Bugs in configure-replication.sh

<stephen.hindmarch@...> writes:
> I've been using configure-replication.sh to make my life easier when
> preparing Slony installation scripts, and I've fixed a couple of bugs
> along the way. I have been using version 1.2.6, but have just checked CVS
> and the bugs are still in trunk.
>  
> Both bugs are due to missing newlines: one in the store_paths
> function, the other in the code that generates "add sequence" lines.
>
> I have created a patch file, which is included below.

Oddly, I was unable to apply the patch; there was something wrong with
the format of it.

At any rate, I have checked this change into HEAD.  I'd like to hold
it off in 1.2 until AFTER the 1.2.12 release, just so that we can say
"Yup, 1.2.12 is baked, and has had no changes all week."

I need to add a further option to this to allow the Gentle User to
specify a path for the slonik scripts to live in; that way I could
actually use this to help set up some of the regression tests, and
therefore draw this script into the set of "things tested."  That
change will definitely only take place in HEAD...

I wrote this script up last year while on a flight to Ohio (I spoke at
their annual "LinuxFest"), and initially set it up as a contribution
to the LedgerSMB project, to help make it easy for their users to
configure replication of their tables.  I was pretty happy with it.
I'm glad you like it.

Cyril SCETBON | 6 Nov 10:39 2007

Re: Move SET seems not to work properly

How is it possible that node 1 is the provider of node 2 and at the same 
time node 2 is the provider of node 1?

Has anyone played with this feature?

thanks.

Cyril SCETBON wrote:

> Hi,
>
>
>
> I have the following configuration:
>
>
> select * from "_CLUSTER1".sl_subscribe;
> sub_set | sub_provider | sub_receiver | sub_forward | sub_active
> ---------+--------------+--------------+-------------+------------
>       1 |            1 |            2 | t           | t
>       1 |            1 |            3 | t           | t
>       1 |            3 |            4 | t           | t
>
>
> I used the following slonik script to make node 2 the new provider:
>
>
> CLUSTER NAME = CLUSTER1;
> NODE 1 ADMIN CONNINFO = 'dbname=dbrep host=host1 user=postgres port=5433';

Geoffrey | 6 Nov 13:38 2007

matching versions of Postgresql on both master and slave necessary?

Is it absolutely necessary to have the same version of Postgres on both 
the master and slave?  The point is, we are currently running 7.4.16 on 
the machines I plan to replicate, but thought I'd go with the latest 7.4 
version (18).  My ignorant assumption is that if you're going to have a 
mismatch in versions, it's best to have the slave newer than the master.

-- 
Until later, Geoffrey

Those who would give up essential Liberty, to purchase a little
temporary Safety, deserve neither Liberty nor Safety.
  - Benjamin Franklin
Geoffrey | 6 Nov 14:10 2007

custom functions necessary on the slave machine?

We have some custom functions built into the PostgreSQL backend that are 
used by our application.  I'm assuming that it's not necessary to have 
these installed on the slave until such time as we may need to use it 
as a replacement for the master.

-- 
Until later, Geoffrey

Those who would give up essential Liberty, to purchase a little
temporary Safety, deserve neither Liberty nor Safety.
  - Benjamin Franklin
Jan Matousek | 6 Nov 15:16 2007

Re: matching versions of Postgresql on both master and slave necessary?

Hi,

I believe cross-version replication is possible.
I have experience replicating between 8.1 (master) and 8.2 (slave), and
it works fine.

j.

On Nov 6, 2007 1:38 PM, Geoffrey <lists@...> wrote:
> Is it absolutely necessary to have the same version of Postgres on both
> the master and slave?  The point is, we are currently running 7.4.16 on the
> machines I plan to replicate, but thought I'd go with the latest 7.4
> version (18).  My ignorant assumption is that if you're going to have a
> mismatch in versions, it's best to have the slave newer than the master.
>
> --
> Until later, Geoffrey
>
> Those who would give up essential Liberty, to purchase a little
> temporary Safety, deserve neither Liberty nor Safety.
>   - Benjamin Franklin
> _______________________________________________
> Slony1-general mailing list
> Slony1-general@...
> http://lists.slony.info/mailman/listinfo/slony1-general
>
