Venkata Balaji N | 22 Aug 11:30 2014

Need help with a production issue

Hello Everyone,

We have a situation where the Slony schema on a slave node was accidentally dropped.

Is there a quick way to recover from this?

We have a master replicating to 3 slaves and one of the slaves does not have the Slony schema.

- Can we just unsubscribe that slave and resubscribe it to catch up with the sync?

If anyone has faced such a situation, please help us with this.
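
What we had in mind is roughly the following slonik script. This is only a sketch that we have not run yet; the cluster name, node IDs, set ID and conninfo strings below are placeholders for our actual setup:

slonik <<'EOF'
cluster name = mycluster;
node 1 admin conninfo = 'dbname=appdb host=master user=slony';
node 3 admin conninfo = 'dbname=appdb host=slave3 user=slony';

# Forget the damaged slave in the rest of the cluster, then register it
# again and resubscribe; the new subscription should trigger a fresh
# full copy of the set to that node.
drop node (id = 3, event node = 1);
store node (id = 3, comment = 'rebuilt slave', event node = 1);
store path (server = 1, client = 3, conninfo = 'dbname=appdb host=master user=slony');
store path (server = 3, client = 1, conninfo = 'dbname=appdb host=slave3 user=slony');
subscribe set (id = 1, provider = 1, receiver = 3, forward = no);
EOF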

Regards,
VBN
Jeff Frost | 19 Aug 05:23 2014

Re: undefined symbol: HeapTupleHeaderGetDatum

On Aug 18, 2014, at 5:23 PM, Brian Fehrle
<brianf@...> wrote:

> Hi All, 
> 
> I'm trying to get a slony 2.2.3 cluster up and running on postgres 9.3.5, and am running into the below error:
> 
> [postgres@localhost bin]$ ./init_cluster.sh ../etc/slony.cfg
> <stdin>:305: loading of file /usr/pgsql-9.3/share//slony1_base.2.2.3.sql: PGRES_FATAL_ERROR ERROR:  could not load library "/usr/pgsql-9.3/lib/plpgsql.so": /usr/pgsql-9.3/lib/plpgsql.so: undefined symbol: HeapTupleHeaderGetDatum
> ERROR:  could not load library "/usr/pgsql-9.3/lib/plpgsql.so": /usr/pgsql-9.3/lib/plpgsql.so: undefined symbol: HeapTupleHeaderGetDatum
> 
> Some system info:
> cat /etc/redhat-release
> Red Hat Enterprise Linux Server release 6.5 (Santiago)
> (64 bit)
> 
> rpm -qa | grep slony
> slony1-93-2.2.3-1.rhel6.x86_64
> 
> rpm -qa | grep postgres
> postgresql93-devel-9.3.5-1PGDG.rhel6.x86_64
> postgresql93-libs-9.3.5-1PGDG.rhel6.x86_64
> postgresql93-9.3.5-1PGDG.rhel6.x86_64
> postgresql93-contrib-9.3.5-1PGDG.rhel6.x86_64
> postgresql93-server-9.3.5-1PGDG.rhel6.x86_64
> 
> I suspect it's an issue with plpgsql itself, but I'm not quite sure yet; has anyone seen this recently?

Any chance you've got more than one set of postgresql packages installed?

What does:

ldconfig /usr/pgsql-9.3/lib/plpgsql.so

return?
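
It might also be worth checking which package owns that plpgsql.so and whether the server binary you're loading it into actually exports the missing symbol. A rough check (paths taken from your rpm output above) would be:

# which package installed the library that fails to load?
rpm -qf /usr/pgsql-9.3/lib/plpgsql.so

# does the library reference the symbol? ('U' means it expects the server to provide it)
nm -D /usr/pgsql-9.3/lib/plpgsql.so | grep HeapTupleHeaderGetDatum

# does the server binary you're running actually export that symbol?
nm -D /usr/pgsql-9.3/bin/postgres | grep HeapTupleHeaderGetDatum
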
Brian Fehrle | 19 Aug 02:23 2014

undefined symbol: HeapTupleHeaderGetDatum

Hi All, 

I'm trying to get a slony 2.2.3 cluster up and running on postgres 9.3.5, and am running into the below error:

[postgres@localhost bin]$ ./init_cluster.sh ../etc/slony.cfg
<stdin>:305: loading of file /usr/pgsql-9.3/share//slony1_base.2.2.3.sql: PGRES_FATAL_ERROR ERROR:  could not load library "/usr/pgsql-9.3/lib/plpgsql.so": /usr/pgsql-9.3/lib/plpgsql.so: undefined symbol: HeapTupleHeaderGetDatum
ERROR:  could not load library "/usr/pgsql-9.3/lib/plpgsql.so": /usr/pgsql-9.3/lib/plpgsql.so: undefined symbol: HeapTupleHeaderGetDatum

Some system info:
cat /etc/redhat-release
Red Hat Enterprise Linux Server release 6.5 (Santiago)
(64 bit)

rpm -qa | grep slony
slony1-93-2.2.3-1.rhel6.x86_64

rpm -qa | grep postgres
postgresql93-devel-9.3.5-1PGDG.rhel6.x86_64
postgresql93-libs-9.3.5-1PGDG.rhel6.x86_64
postgresql93-9.3.5-1PGDG.rhel6.x86_64
postgresql93-contrib-9.3.5-1PGDG.rhel6.x86_64
postgresql93-server-9.3.5-1PGDG.rhel6.x86_64

I suspect it's an issue with plpgsql itself, but I'm not quite sure yet; has anyone seen this recently?

Thanks,
- Brian F
Soni M | 15 Aug 01:55 2014

cleanup_interval question

Hello Everyone,

I thought I would get the cleanup event every 10 minutes, which is the default.
I wanted a different value for this parameter and set it to 20 minutes, but that doesn't seem to work.
How can I change the interval of the cleanup event in the slon process?

config :
cleanup_interval="20 minutes"

related log :
NOTICE:  Slony-I: log switch to sl_log_2 complete - truncate sl_log_1
CONTEXT:  PL/pgSQL function "cleanupevent" line 100 at assignment
2014-08-14 17:46:23 EDTINFO   cleanupThread:    0.060 seconds for cleanupEvent()
NOTICE:  Slony-I: Logswitch to sl_log_1 initiated
CONTEXT:  SQL statement "SELECT "_slony_example".logswitch_start()"
PL/pgSQL function "cleanupevent" line 102 at PERFORM
2014-08-14 17:57:18 EDTINFO   cleanupThread:    0.005 seconds for cleanupEvent()
NOTICE:  Slony-I: log switch to sl_log_1 still in progress - sl_log_2 not truncated
CONTEXT:  PL/pgSQL function "cleanupevent" line 100 at assignment
2014-08-14 18:08:13 EDTINFO   cleanupThread:    0.043 seconds for cleanupEvent()
NOTICE:  Slony-I: log switch to sl_log_1 complete - truncate sl_log_2
CONTEXT:  PL/pgSQL function "cleanupevent" line 100 at assignment
2014-08-14 18:19:02 EDTINFO   cleanupThread:    0.057 seconds for cleanupEvent()
NOTICE:  Slony-I: Logswitch to sl_log_2 initiated
CONTEXT:  SQL statement "SELECT "_slony_example".logswitch_start()"
PL/pgSQL function "cleanupevent" line 102 at PERFORM
2014-08-14 18:29:52 EDTINFO   cleanupThread:    0.004 seconds for cleanupEvent()
NOTICE:  Slony-I: log switch to sl_log_2 still in progress - sl_log_1 not truncated
CONTEXT:  PL/pgSQL function "cleanupevent" line 100 at assignment
2014-08-14 18:41:15 EDTINFO   cleanupThread:    0.038 seconds for cleanupEvent()

OS : Ubuntu 12.04 LTS
PG : 9.1 from ubuntu package
slony : 2.0.7 from ubuntu package
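
For reference, this is roughly how I start the slon. I assume the config file has to be passed explicitly with -f for cleanup_interval to be picked up; the file path and the conninfo below are placeholders for my real setup (the cluster name "slony_example" is the one from the log above):

# the config file contains the line:  cleanup_interval="20 minutes"
slon -f /etc/slony1/slon_node2.conf slony_example 'dbname=mydb host=localhost user=slony'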

--
Thanks,

Soni Maula Harriz
Venkata Balaji N | 6 Aug 06:56 2014

Slony1-2.2.X binaries for Solaris 10 SPARC

Hello,

Can anyone please let us know whether Slony-I binaries, preferably a 2.x version, are available for Solaris 10 SPARC?
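
If no prebuilt packages exist, we are also open to building from source. Our rough plan is the following (the pg_config directory is a placeholder for our Solaris box, and we assume GNU make and a working compiler toolchain are available):

wget http://main.slony.info/downloads/2.2/source/slony1-2.2.3.tar.bz2
bzip2 -dc slony1-2.2.3.tar.bz2 | tar xf -
cd slony1-2.2.3
./configure --with-pgconfigdir=/opt/postgres/9.3/bin   # directory containing pg_config
gmake
gmake install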

Thanks in advance for your help !

Regards,
Venkata B N
Romain Dessort | 1 Aug 14:15 2014

Initial sync copies tables infinitely

Hello,

I have a strange problem after adding a third node to my Slony cluster:

Up to now, I have had a really simple Slony cluster: one master (node1) and one
slave (node2), which is subscribed to a single set (set1). Everything has
worked fine for several years.
Now I have to add a second slave (node3) subscribed to the same set as
node2, so I followed the documentation [1] to configure it and then started the
slon daemon on node3.
I can see in the slon log file that the tables are being copied correctly.
I waited a few days (there is about 50GB of data to copy), but here is the weird
problem: the tables on the new slave seem to grow indefinitely, yet they contain
no data!

Example with the table “affectation”: its normal size on the master is 149MB.
But on the new slave:

    =# SELECT relname AS "relation", pg_size_pretty(pg_relation_size(C.oid)) AS "size"
      FROM pg_class C LEFT JOIN pg_namespace N ON (N.oid = C.relnamespace)
      WHERE nspname NOT IN ('pg_catalog', 'information_schema') AND relname='affectation';
      relation   |  size  
    -------------+--------
     affectation | 22 GB
    (1 row)

But the table is empty:

    =# select * from affectation;
     affectation_id | datedu | dateau | passager | produitchoisi 
    ----------------+--------+--------+----------+---------------
    (0 rows)

If I do a VACUUM on the slave, all the table sizes go back to zero.

In the log file, the tables are recopied every 11-12 minutes:

    # grep '"public"."affectation"' /var/log/slony1/node3-gesto2.log
    […]
    2014-08-01 12:06:14 CEST CONFIG remoteWorkerThread_1: prepare to copy table "public"."affectation"
    2014-08-01 12:06:20 CEST CONFIG remoteWorkerThread_1: copy table "public"."affectation"
    2014-08-01 12:06:20 CEST CONFIG remoteWorkerThread_1: Begin COPY of table "public"."affectation"
    NOTICE:  truncate of "public"."affectation" failed - doing delete
    2014-08-01 12:06:27 CEST CONFIG remoteWorkerThread_1: 124888731 bytes copied for table "public"."affectation"
    2014-08-01 12:06:31 CEST CONFIG remoteWorkerThread_1: 10.932 seconds to copy table "public"."affectation"
    2014-08-01 12:12:23 CEST CONFIG remoteWorkerThread_1: prepare to copy table "public"."affectation"
    2014-08-01 12:12:28 CEST CONFIG remoteWorkerThread_1: copy table "public"."affectation"
    2014-08-01 12:12:28 CEST CONFIG remoteWorkerThread_1: Begin COPY of table "public"."affectation"
    NOTICE:  truncate of "public"."affectation" failed - doing delete
    2014-08-01 12:12:35 CEST CONFIG remoteWorkerThread_1: 124892017 bytes copied for table "public"."affectation"
    2014-08-01 12:12:39 CEST CONFIG remoteWorkerThread_1: 10.954 seconds to copy table "public"."affectation"
    2014-08-01 12:18:33 CEST CONFIG remoteWorkerThread_1: prepare to copy table "public"."affectation"
    2014-08-01 12:18:38 CEST CONFIG remoteWorkerThread_1: copy table "public"."affectation"
    2014-08-01 12:18:38 CEST CONFIG remoteWorkerThread_1: Begin COPY of table "public"."affectation"
    NOTICE:  truncate of "public"."affectation" failed - doing delete
    2014-08-01 12:18:45 CEST CONFIG remoteWorkerThread_1: 124892761 bytes copied for table "public"."affectation"
    2014-08-01 12:18:49 CEST CONFIG remoteWorkerThread_1: 11.409 seconds to copy table "public"."affectation"
    […]

Note the correct table size (124742845 bytes) is copied.

To isolate the problem, I tried creating a new table, adding it to a second set and
subscribing that set to node3: everything was OK and the copy process finished successfully. So
the problem seems to come from set1 specifically.

All nodes run Slony 2.1.4 and PostgreSQL 9.3.4 under the same configuration.

Do you have any ideas about this problem?
It seems that Slony doesn't (or can't) acknowledge that the tables have been
copied, and so it redoes the copy again and again.
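
In case it is useful, this is the kind of check I intend to run against the provider (node1) to see whether node3's subscription is marked active and whether its confirmations ever arrive. The schema name "_mycluster" and the database name are placeholders for my real cluster:

# is the subscription of node3 active?
psql -d mydb -c "SELECT sub_set, sub_provider, sub_receiver, sub_active
                   FROM _mycluster.sl_subscribe
                  WHERE sub_receiver = 3;"

# highest event sequence from node1 that node3 has confirmed so far
psql -d mydb -c "SELECT con_origin, con_received, max(con_seqno) AS last_confirmed
                   FROM _mycluster.sl_confirm
                  WHERE con_received = 3
                  GROUP BY con_origin, con_received;"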

[1] http://slony.info/documentation/2.1/modifyingthings.html#AEN1048

Thanks for any hint about that.
Regards,
-- 
Romain Dessort <rdessort <at> evolix.fr> GnuPG: 3072D/724BC532
Evolix − Open Source hosting and managed services http://www.evolix.fr/
Dave Cramer | 22 Jul 20:53 2014

replicating from 9.3 to 8.4

Are there any known issues?

Dave Cramer
Ger Timmens | 17 Jul 23:45 2014

slony 2.2.3 experiences

Hi all,

After upgrading our environment from Slony 2.1.4 to Slony 2.2.3,
we see 'a lot' more Slony connections on the master, the forwarders
and the subscribers.

E.g. on the master we see approx. 120 more 'idle' Slony connections.
These connections start at around 50 and grow to 120 over time
(restarting the slons brings the count down, and then the connections increase again).

Is this related to the failover logic that changed from 2.1.x to
2.2.x, where each forwarding node has to know about every other
node?

If so, is there a way to go back to the 2.1.x behaviour (is it
configurable)?
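
For what it's worth, this is the rough query we run on the master to count those connections per node (we assume the slons connect as a role named 'slony'; adjust the filter for your setup):

psql -d mydb -c "SELECT usename, client_addr, count(*)
                   FROM pg_stat_activity
                  WHERE usename = 'slony'
                  GROUP BY usename, client_addr
                  ORDER BY count(*) DESC;"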

Thanks,

Ger Timmens
Dave Cramer | 16 Jul 18:24 2014

How to get database profiling with slony

I'm looking to figure out the reads/writes/transactions per day, net of Slony.

Log grepping is not an option, as the app generates too many logs.

Any ideas?
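
One idea I'm toying with is to snapshot the per-table statistics and diff them after 24 hours, excluding the Slony schema. A rough sketch ('mydb' is a placeholder, and it assumes the Slony schema is the only schema whose name starts with an underscore):

# snapshot these numbers now and again in 24 hours, then diff them;
# pg_stat_database.xact_commit would also count slon's own transactions,
# so per-table stats seemed like the safer starting point
psql -d mydb -c "SELECT sum(n_tup_ins) AS rows_inserted,
                        sum(n_tup_upd) AS rows_updated,
                        sum(n_tup_del) AS rows_deleted,
                        sum(seq_tup_read + idx_tup_fetch) AS rows_read
                   FROM pg_stat_user_tables
                  WHERE substr(schemaname, 1, 1) <> '_';"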

Dave Cramer
Dave Cramer | 14 Jul 15:16 2014

Slony over very long distances


How well does this work? Does anyone have real-world experience with this?


Dave Cramer
Steve Singer | 8 Jul 03:54 2014

Slony 2.2.3 released

The Slony team is pleased to announce Slony 2.2.3, the next minor release
of the Slony 2.2.x series.

Slony 2.2.3 includes the following changes:

  - Bug 338 - Have ddlScript return a bigint instead of an integer
  - Fix a deadlock with the application during a minor-version Slony upgrade
  - Bug 342 - FAILOVER fixes for some multi-node configurations
  - Remove HAVE_POSIX_SIGNALS from config.h
  - Bug 344 - Do not abort reading slon config values when an unrecognized
    one is encountered

Slony 2.2.3 can be downloaded from the following URL:

http://main.slony.info/downloads/2.2/source/slony1-2.2.3.tar.bz2
