umang singh | 28 Aug 20:58 2014

Unavailability of a node and timeouts

Hi All,

The slony documentation mentions that Slony-I is a system designed for use at data centers and backup sites, where the normal mode of operation is that all nodes are available. 

I am using Slony to replicate a database between 2 nodes. If the node housing the master database is unavailable, does the slon process running on the slave node time out while trying to connect to the master database, or does it keep trying to connect indefinitely?

When the node housing the master DB is down, I get repeated "could not connect to server: No route to host" messages in the logs on the slave node, and it is not entirely clear whether Slony generates this message or whether it comes from some other psql command.
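
For reference, the conninfo string we pass to slon is, as far as I understand, a standard libpq connection string, so I assume a connect_timeout could be added to it so that each connection attempt gives up after a fixed number of seconds rather than waiting on the OS default (host and database names below are made up):

# connect_timeout is a standard libpq conninfo parameter; names are hypothetical
slon slony_example 'host=master-host dbname=mydb user=slony connect_timeout=10'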

Regards,
Umang
Sandeep Thakkar | 25 Aug 09:27 2014

Slony1-2.2.3 sources fail to compile on Linux against PG9.4

Hi

While building the Slony1-2.2.3 tarball on Linux against PG9.4, I observed that configure failed for the following reason:

--
configure:5703: checking for pgport
configure:5729: gcc -o conftest -g -O2   -I/mnt/hgfs/pginstaller/server/staging/linux-x64/include/postgresql/server/  -L/mnt/hgfs/pginstaller/server/staging/linux-x64/lib/ conftest.c  -lpgcommon >&5
configure:5729: $? = 0
configure:5731: result: yes
configure:5763: gcc -o conftest -g -O2   -I/mnt/hgfs/pginstaller/server/staging/linux-x64/include/postgresql/server/  -L/mnt/hgfs/pginstaller/server/staging/linux-x64/lib/ conftest.c  -lpgport  -lpgcommon >&5
/mnt/hgfs/pginstaller/server/staging/linux-x64/lib//libpgcommon.a(exec.o): In function `resolve_symlinks':
exec.c:(.text+0x1a6): undefined reference to `last_dir_separator'
exec.c:(.text+0x1f5): undefined reference to `strlcpy'
exec.c:(.text+0x219): undefined reference to `join_path_components'
exec.c:(.text+0x221): undefined reference to `canonicalize_path'
/mnt/hgfs/pginstaller/server/staging/linux-x64/lib//libpgcommon.a(exec.o): In function `find_my_exec':
exec.c:(.text+0x36d): undefined reference to `first_dir_separator'
exec.c:(.text+0x38c): undefined reference to `join_path_components'
exec.c:(.text+0x394): undefined reference to `canonicalize_path'
exec.c:(.text+0x474): undefined reference to `first_path_var_separator'
exec.c:(.text+0x4c3): undefined reference to `join_path_components'
exec.c:(.text+0x4d1): undefined reference to `join_path_components'
exec.c:(.text+0x4d9): undefined reference to `canonicalize_path'
exec.c:(.text+0x552): undefined reference to `join_path_components'
/mnt/hgfs/pginstaller/server/staging/linux-x64/lib//libpgcommon.a(exec.o): In function `set_pglocale_pgservice':
exec.c:(.text+0x620): undefined reference to `get_etc_path'
exec.c:(.text+0x641): undefined reference to `canonicalize_path'
/mnt/hgfs/pginstaller/server/staging/linux-x64/lib//libpgcommon.a(exec.o): In function `find_other_exec':
exec.c:(.text+0x6c1): undefined reference to `last_dir_separator'
exec.c:(.text+0x6cc): undefined reference to `canonicalize_path'
collect2: ld returned 1 exit status
--

$ nm -oA server/staging/linux-x64/lib/libpg* | grep last_dir_separator
server/staging/linux-x64/lib/libpgcommon.a:exec.o:                 U last_dir_separator
server/staging/linux-x64/lib/libpgport.a:path.o:0000000000000070 T last_dir_separator

So I'm wondering why we see undefined symbols even though the pgport library is included in the link command.
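
One guess: with static archives the linker resolves symbols left to right, and the missing symbols (last_dir_separator, strlcpy, join_path_components, ...) are defined in libpgport.a but are only needed by exec.o from libpgcommon.a, which is listed after -lpgport on the command line. If that is what is happening, swapping the library order (or repeating -lpgport after -lpgcommon) should make the conftest link succeed, e.g.:

gcc -o conftest -g -O2 \
    -I/mnt/hgfs/pginstaller/server/staging/linux-x64/include/postgresql/server/ \
    -L/mnt/hgfs/pginstaller/server/staging/linux-x64/lib/ \
    conftest.c -lpgcommon -lpgport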


--
Sandeep Thakkar

Venkata Balaji N | 23 Aug 11:30 2014

Slony-I Upgrade

Hello All,

We are upgrading our existing Slony-I installations across our production servers.

We are upgrading from version 2.0.3 to version 2.2.3.

We will have our applications down for another scheduled activity.

Below is our upgrade procedure:
  • Install Slony-I-2.2.3 on all the nodes
  • Issue SLONIK_UPDATE_NODES -c <config file>  from any of the nodes (we do it from our master node)
  • Start the slon processes for all the nodes
Will this be OK for upgrading from 2.0.3 to 2.2.3?
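
For reference, this is roughly the slonik script we understand SLONIK_UPDATE_NODES generates for step 2 (cluster name, node IDs and conninfo strings below are made up):

slonik <<'EOF'
cluster name = slony_example;
node 1 admin conninfo = 'host=master dbname=mydb user=slony';
node 2 admin conninfo = 'host=slave dbname=mydb user=slony';

# load the 2.2.3 stored functions into each node
UPDATE FUNCTIONS ( ID = 1 );
UPDATE FUNCTIONS ( ID = 2 );
EOF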

Please let us know if we have to do anything else.

Regards,
VBN


Venkata Balaji N | 22 Aug 11:30 2014

Need help with a production issue

Hello Everyone,

We have a situation where the Slony schema on a slave node was accidentally dropped.

Is there a quick way around this?

We have a master replicating to 3 slaves and one of the slaves does not have the Slony schema.

- Can we just unsubscribe that slave and then resubscribe it so that it catches up with the sync?
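
In case it helps the discussion, this is the rough drop/re-add/resubscribe sequence we are considering (cluster name, node IDs and conninfo strings are made up, and we are not sure it is the right approach):

slonik <<'EOF'
cluster name = slony_example;
node 1 admin conninfo = 'host=master dbname=mydb user=slony';
node 3 admin conninfo = 'host=slave3 dbname=mydb user=slony';

# make the remaining nodes forget the damaged slave ...
DROP NODE ( ID = 3, EVENT NODE = 1 );

# ... then recreate it and resubscribe, which re-copies the set from scratch
STORE NODE ( ID = 3, COMMENT = 'rebuilt slave', EVENT NODE = 1 );
STORE PATH ( SERVER = 1, CLIENT = 3, CONNINFO = 'host=master dbname=mydb user=slony' );
STORE PATH ( SERVER = 3, CLIENT = 1, CONNINFO = 'host=slave3 dbname=mydb user=slony' );
SUBSCRIBE SET ( ID = 1, PROVIDER = 1, RECEIVER = 3, FORWARD = NO );
EOF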

If anyone has faced such a situation, please help us with this.

Regards,
VBN
Jeff Frost | 19 Aug 05:23 2014

Re: undefined symbol: HeapTupleHeaderGetDatum

On Aug 18, 2014, at 5:23 PM, Brian Fehrle
<brianf@...> wrote:

> Hi All, 
> 
> I'm trying to get a slony 2.2.3 cluster up and running on postgres 9.3.5, and am running into the below error:
> 
> [postgres <at> localhost bin]$ ./init_cluster.sh ../etc/slony.cfg
> <stdin>:305: loading of file /usr/pgsql-9.3/share//slony1_base.2.2.3.sql: PGRES_FATAL_ERROR ERROR:  could not load library "/usr/pgsql-9.3/lib/plpgsql.so": /usr/pgsql-9.3/lib/plpgsql.so: undefined symbol: HeapTupleHeaderGetDatum
> ERROR:  could not load library "/usr/pgsql-9.3/lib/plpgsql.so": /usr/pgsql-9.3/lib/plpgsql.so: undefined symbol: HeapTupleHeaderGetDatum
> 
> Some system info:
> cat /etc/redhat-release
> Red Hat Enterprise Linux Server release 6.5 (Santiago)
> (64 bit)
> 
> rpm -qa | grep slony
> slony1-93-2.2.3-1.rhel6.x86_64
> 
> rpm -qa | grep postgres
> postgresql93-devel-9.3.5-1PGDG.rhel6.x86_64
> postgresql93-libs-9.3.5-1PGDG.rhel6.x86_64
> postgresql93-9.3.5-1PGDG.rhel6.x86_64
> postgresql93-contrib-9.3.5-1PGDG.rhel6.x86_64
> postgresql93-server-9.3.5-1PGDG.rhel6.x86_64
> 
> I'm suspecting it's an issue with plpgsql itself, but not quite sure yet, wondering if anyone has seen this recently.

Any chance you've got more than one set of postgresql packages installed?

What does:

ldd /usr/pgsql-9.3/lib/plpgsql.so

return?
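
Also, if I remember correctly, HeapTupleHeaderGetDatum only became an exported backend function in 9.4, so one guess is that a plpgsql.so built against 9.4 headers is being loaded by your 9.3 server. A couple of checks that might narrow it down (paths are just the ones from your report):

# does the module reference the symbol? (U = undefined/imported)
nm -D /usr/pgsql-9.3/lib/plpgsql.so | grep HeapTupleHeaderGetDatum

# a 9.3 postgres binary should not export this symbol at all
nm -D /usr/pgsql-9.3/bin/postgres | grep HeapTupleHeaderGetDatum

# which package owns the module that is actually being loaded
rpm -qf /usr/pgsql-9.3/lib/plpgsql.so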
Brian Fehrle | 19 Aug 02:23 2014

undefined symbol: HeapTupleHeaderGetDatum

Hi All, 

I'm trying to get a slony 2.2.3 cluster up and running on postgres 9.3.5, and am running into the below error:

[postgres <at> localhost bin]$ ./init_cluster.sh ../etc/slony.cfg
<stdin>:305: loading of file /usr/pgsql-9.3/share//slony1_base.2.2.3.sql: PGRES_FATAL_ERROR ERROR:  could not load library "/usr/pgsql-9.3/lib/plpgsql.so": /usr/pgsql-9.3/lib/plpgsql.so: undefined symbol: HeapTupleHeaderGetDatum
ERROR:  could not load library "/usr/pgsql-9.3/lib/plpgsql.so": /usr/pgsql-9.3/lib/plpgsql.so: undefined symbol: HeapTupleHeaderGetDatum

Some system info:
cat /etc/redhat-release
Red Hat Enterprise Linux Server release 6.5 (Santiago)
(64 bit)

rpm -qa | grep slony
slony1-93-2.2.3-1.rhel6.x86_64

rpm -qa | grep postgres
postgresql93-devel-9.3.5-1PGDG.rhel6.x86_64
postgresql93-libs-9.3.5-1PGDG.rhel6.x86_64
postgresql93-9.3.5-1PGDG.rhel6.x86_64
postgresql93-contrib-9.3.5-1PGDG.rhel6.x86_64
postgresql93-server-9.3.5-1PGDG.rhel6.x86_64

I suspect it's an issue with plpgsql itself, but I'm not quite sure yet; I'm wondering if anyone has seen this recently.

Thanks,
- Brian F
Soni M | 15 Aug 01:55 2014

cleanup_interval question

Hello Everyone,

I see the cleanup event run roughly every 10 minutes, which is the default.
I wanted a different value for this parameter and set it to 20 minutes, but that doesn't seem to have any effect.
How can I change the interval of the cleanup event for the slon process?

config :
cleanup_interval="20 minutes"

related log :
NOTICE:  Slony-I: log switch to sl_log_2 complete - truncate sl_log_1
CONTEXT:  PL/pgSQL function "cleanupevent" line 100 at assignment
2014-08-14 17:46:23 EDTINFO   cleanupThread:    0.060 seconds for cleanupEvent()
NOTICE:  Slony-I: Logswitch to sl_log_1 initiated
CONTEXT:  SQL statement "SELECT "_slony_example".logswitch_start()"
PL/pgSQL function "cleanupevent" line 102 at PERFORM
2014-08-14 17:57:18 EDTINFO   cleanupThread:    0.005 seconds for cleanupEvent()
NOTICE:  Slony-I: log switch to sl_log_1 still in progress - sl_log_2 not truncated
CONTEXT:  PL/pgSQL function "cleanupevent" line 100 at assignment
2014-08-14 18:08:13 EDTINFO   cleanupThread:    0.043 seconds for cleanupEvent()
NOTICE:  Slony-I: log switch to sl_log_1 complete - truncate sl_log_2
CONTEXT:  PL/pgSQL function "cleanupevent" line 100 at assignment
2014-08-14 18:19:02 EDTINFO   cleanupThread:    0.057 seconds for cleanupEvent()
NOTICE:  Slony-I: Logswitch to sl_log_2 initiated
CONTEXT:  SQL statement "SELECT "_slony_example".logswitch_start()"
PL/pgSQL function "cleanupevent" line 102 at PERFORM
2014-08-14 18:29:52 EDTINFO   cleanupThread:    0.004 seconds for cleanupEvent()
NOTICE:  Slony-I: log switch to sl_log_2 still in progress - sl_log_1 not truncated
CONTEXT:  PL/pgSQL function "cleanupevent" line 100 at assignment
2014-08-14 18:41:15 EDTINFO   cleanupThread:    0.038 seconds for cleanupEvent()

OS : Ubuntu 12.04 LTS
PG : 9.1 from ubuntu package
slony : 2.0.7 from ubuntu package
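
One thing I am not sure about: the sample slon configs I have seen quote interval values with single quotes, and I do not know whether double quotes are parsed the same way. To rule out quoting or config-file problems, something like this (paths and connection details are hypothetical) should make sure slon actually reads the value:

# write the parameter with single quotes, as in the sample config
cat > /etc/slony1/slon.conf <<'EOF'
cleanup_interval='20 minutes'
EOF

# -f points slon at the config file explicitly, -d 2 raises the log level so
# the cleanup thread's scheduling shows up in the log
slon -f /etc/slony1/slon.conf -d 2 slony_example 'host=localhost dbname=mydb user=slony'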

--
Thanks,

Soni Maula Harriz
Venkata Balaji N | 6 Aug 06:56 2014

Slony1-2.2.X binaries for Solaris 10 SPARC

Hello,

Can anyone please let us know whether Slony-I binaries, preferably a 2.x version, are available for Solaris 10 SPARC?
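
Failing that, I assume building from the source tarball against our existing PostgreSQL installation would also work on Solaris; a rough sketch (paths are made up, and GNU gcc/gmake are assumed to be available):

bzip2 -dc slony1-2.2.3.tar.bz2 | tar xf -
cd slony1-2.2.3
./configure --with-pgconfigdir=/opt/postgres/bin   # directory containing pg_config
gmake
gmake install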

Thanks in advance for your help !

Regards,
Venkata B N
Romain Dessort | 1 Aug 14:15 2014

Initial sync copies tables infinitely

Hello,

I have a strange problem after adding a third node to my Slony cluster:

Up to now I have had a really simple Slony cluster: one master (node1) and one
slave (node2), which is subscribed to a single set (set1). Everything has
worked fine for several years.
Now I have to add a second slave (node3) subscribed to the same set as
node2. So I followed the documentation [1] to configure it and then started the
slon daemon on node3.
I can see in the slon log file that the tables are being copied correctly.
I waited a few days (there is about 50GB of data to copy), but here is the weird
problem: the tables on the new slave seem to grow indefinitely, yet contain no
data!

Example with the table “affectation”: its normal size on the master is 149MB.
But on the new slave:

    =# SELECT relname AS "relation", pg_size_pretty(pg_relation_size(C.oid)) AS "size"
      FROM pg_class C LEFT JOIN pg_namespace N ON (N.oid = C.relnamespace)
      WHERE nspname NOT IN ('pg_catalog', 'information_schema') AND relname='affectation';
      relation   |  size  
    -------------+--------
     affectation | 22 GB
    (1 row)

But the table is empty:

    =# select * from affectation;
     affectation_id | datedu | dateau | passager | produitchoisi 
    ----------------+--------+--------+----------+---------------
    (0 rows)

If I do a VACUUM on the slave, all the tables' sizes go back to zero.

In the log file, the tables are recopied every 11-12 minutes:

    # grep '"public"."affectation"' /var/log/slony1/node3-gesto2.log
    […]
    2014-08-01 12:06:14 CEST CONFIG remoteWorkerThread_1: prepare to copy table "public"."affectation"
    2014-08-01 12:06:20 CEST CONFIG remoteWorkerThread_1: copy table "public"."affectation"
    2014-08-01 12:06:20 CEST CONFIG remoteWorkerThread_1: Begin COPY of table "public"."affectation"
    NOTICE:  truncate of "public"."affectation" failed - doing delete
    2014-08-01 12:06:27 CEST CONFIG remoteWorkerThread_1: 124888731 bytes copied for table "public"."affectation"
    2014-08-01 12:06:31 CEST CONFIG remoteWorkerThread_1: 10.932 seconds to copy table "public"."affectation"
    2014-08-01 12:12:23 CEST CONFIG remoteWorkerThread_1: prepare to copy table "public"."affectation"
    2014-08-01 12:12:28 CEST CONFIG remoteWorkerThread_1: copy table "public"."affectation"
    2014-08-01 12:12:28 CEST CONFIG remoteWorkerThread_1: Begin COPY of table "public"."affectation"
    NOTICE:  truncate of "public"."affectation" failed - doing delete
    2014-08-01 12:12:35 CEST CONFIG remoteWorkerThread_1: 124892017 bytes copied for table "public"."affectation"
    2014-08-01 12:12:39 CEST CONFIG remoteWorkerThread_1: 10.954 seconds to copy table "public"."affectation"
    2014-08-01 12:18:33 CEST CONFIG remoteWorkerThread_1: prepare to copy table "public"."affectation"
    2014-08-01 12:18:38 CEST CONFIG remoteWorkerThread_1: copy table "public"."affectation"
    2014-08-01 12:18:38 CEST CONFIG remoteWorkerThread_1: Begin COPY of table "public"."affectation"
    NOTICE:  truncate of "public"."affectation" failed - doing delete
    2014-08-01 12:18:45 CEST CONFIG remoteWorkerThread_1: 124892761 bytes copied for table "public"."affectation"
    2014-08-01 12:18:49 CEST CONFIG remoteWorkerThread_1: 11.409 seconds to copy table "public"."affectation"
    […]

Note that the full table (124742845 bytes) is copied each time.

To isolate the problem, I created a new table, added it to a second set and
subscribed node3 to that set: everything was OK and the copy process finished
successfully. So the problem seems to be specific to set1.

All nodes run Slony 2.1.4 and PostgreSQL 9.3.4 under the same configuration.

Do you have any ideas about this problem?
It seems that Slony doesn't/can't acknowledge that the tables have been copied,
and so it redoes the copy again and again.
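
One thing that could be checked is whether the subscription for node3 ever gets marked active, and whether the ENABLE_SUBSCRIPTION event keeps being re-issued; something like this, where the cluster schema and database names are placeholders:

psql -d mydb -c "SELECT sub_set, sub_provider, sub_receiver, sub_active
                   FROM _mycluster.sl_subscribe;"

psql -d mydb -c "SELECT ev_origin, ev_type, ev_timestamp
                   FROM _mycluster.sl_event
                  WHERE ev_type = 'ENABLE_SUBSCRIPTION'
                  ORDER BY ev_timestamp DESC LIMIT 10;"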

[1] http://slony.info/documentation/2.1/modifyingthings.html#AEN1048

Thanks for any hint about that.
Regards,
--
Romain Dessort <rdessort <at> evolix.fr> GnuPG: 3072D/724BC532
Evolix − Open Source Hosting and Managed Services http://www.evolix.fr/
Dave Cramer | 22 Jul 20:53 2014

replicating from 9.3 to 8.4

Are there any known issues with replicating from 9.3 to 8.4?

Dave Cramer
Ger Timmens | 17 Jul 23:45 2014

slony 2.2.3 experiences

Hi all,

After upgrading our environment from Slony 2.1.4 to Slony 2.2.3
we see a lot more Slony connections on the master, the forwarders
and the subscribers.

E.g. on the master we see approximately 120 more 'idle' Slony connections.
These connections start at around 50 and grow to 120 over time
(restarting the slons brings the number down, after which the connections increase again).
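
For reference, we count them roughly like this (the database name is made up, and the state column assumes PostgreSQL 9.2 or later):

psql -d mydb -c "SELECT usename, client_addr, state, count(*)
                   FROM pg_stat_activity
                  GROUP BY 1, 2, 3
                  ORDER BY 4 DESC;"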

Is this perhaps related to the changed failover logic in Slony 2.2.x
compared to 2.1.x, where each forwarding node should know about every
other node?

If so, is there a way to go back to the 2.1.x behaviour (is it
configurable)?

Thanks,

Ger Timmens
