Jaime Rivera | 31 May 17:44 2016

Triggers not being fired with pglogical

Hi,

I have a basic replica with pglogical 1.1 and postgres 9.5 on both servers publisher and subscriber.

I have triggers on both the publisher and subscriber tables, but the trigger on the subscriber table is not being fired.

Is it possible to fire the trigger with pglogical replication?
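
From what I have read so far (corrections welcome), pglogical applies changes with session_replication_role set to replica, so ordinary triggers are skipped on the subscriber; only triggers marked ENABLE REPLICA or ENABLE ALWAYS fire during apply. A sketch of what I plan to try, with hypothetical table/trigger names:

-- hypothetical names; untested sketch
ALTER TABLE my_table ENABLE ALWAYS TRIGGER my_trigger;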

Thanks in advance
Thalis Kalfigkopoulos | 31 May 15:49 2016

Drop/Re-Creating database extremely slow + doesn't lose data

Intention: to drop a database and recreate it.
Expectation: the newly created db should be empty
What happens: dropping is fast, creation is slow, and when I reconnect, all the data objects are still there.

Commands (tried both from the command line with dropdb/createdb and through psql):

pgdba <at> template1[[local]:5952] # vacuum full;
VACUUM
Time: 61292.151 ms
pgdba <at> template1[[local]:5952] # \l
                              List of databases
   Name    | Owner | Encoding |   Collate   |    Ctype    | Access privileges
-----------+-------+----------+-------------+-------------+-------------------
 postgres  | pgdba | UTF8     | en_US.UTF-8 | en_US.UTF-8 |
 template0 | pgdba | UTF8     | en_US.UTF-8 | en_US.UTF-8 | =c/pgdba         +
           |       |          |             |             | pgdba=CTc/pgdba
 template1 | pgdba | UTF8     | en_US.UTF-8 | en_US.UTF-8 | =c/pgdba         +
           |       |          |             |             | pgdba=CTc/pgdba
(3 rows)

pgdba <at> template1[[local]:5952] # CREATE DATABASE dafodb;
CREATE DATABASE
Time: 35776.047 ms
pgdba <at> template1[[local]:5952] # 


And the corresponding lines from pg_log:

2016-05-31 15:29:46 CEST [4591]: user=pgdba,db=template1,app=psql,client=[local] LOG:  statement: CREATE DATABASE dafodb;
2016-05-31 15:29:46 CEST [3470]: user=,db=,app=,client= LOG:  checkpoint starting: immediate force wait flush-all
2016-05-31 15:29:46 CEST [3470]: user=,db=,app=,client= LOG:  checkpoint complete: wrote 241 buffers (1.5%); 0 transaction log file(s) added, 0 removed, 0 recycled; write=0.002 s, sync=0.035 s, total=0.045 s; sync files=54, longest=0.003 s, average=0.000 s; distance=67120 kB, estimate=67120 kB
2016-05-31 15:29:52 CEST [4596]: user=,db=,app=,client= LOG:  process 4596 still waiting for RowExclusiveLock on object 1 of class 1262 of database 0 after 1000.138 ms
2016-05-31 15:29:52 CEST [4596]: user=,db=,app=,client= DETAIL:  Process holding the lock: 4591. Wait queue: 4596.
2016-05-31 15:30:22 CEST [3470]: user=,db=,app=,client= LOG:  checkpoint starting: immediate force wait
2016-05-31 15:30:22 CEST [3470]: user=,db=,app=,client= LOG:  checkpoint complete: wrote 0 buffers (0.0%); 0 transaction log file(s) added, 4 removed, 0 recycled; write=0.000 s, sync=0.000 s, total=0.004 s; sync files=0, longest=0.000 s, average=0.000 s; distance=0 kB, estimate=60408 kB
2016-05-31 15:30:22 CEST [4591]: user=pgdba,db=template1,app=psql,client=[local] LOG:  duration: 35775.909 ms
2016-05-31 15:30:22 CEST [4596]: user=,db=,app=,client= LOG:  process 4596 acquired RowExclusiveLock on object 1 of class 1262 of database 0 after 31471.839 ms
2016-05-31 15:30:22 CEST [4596]: user=,db=,app=,client= LOG:  automatic vacuum of table "template1.pg_catalog.pg_statistic": index scans: 1
        pages: 0 removed, 54 remain, 0 skipped due to pins
        tuples: 108 removed, 724 remain, 0 are dead but not yet removable
        buffer usage: 106 hits, 39 misses, 62 dirtied
        avg read rate: 2.044 MB/s, avg write rate: 3.250 MB/s
        system usage: CPU 0.00s/0.00u sec elapsed 0.14 sec
2016-05-31 15:30:51 CEST [4614]: user=,db=,app=,client= LOG:  automatic analyze of table "template1.pg_catalog.pg_shdepend" system usage: CPU 0.00s/0.00u sec elapsed 0.02 sec


Then I check that the newly created database is there:
pgdba <at> template1[[local]:5952] # \l
                              List of databases
   Name    | Owner | Encoding |   Collate   |    Ctype    | Access privileges
-----------+-------+----------+-------------+-------------+-------------------
 dafodb    | pgdba | UTF8     | en_US.UTF-8 | en_US.UTF-8 |
 postgres  | pgdba | UTF8     | en_US.UTF-8 | en_US.UTF-8 |
 template0 | pgdba | UTF8     | en_US.UTF-8 | en_US.UTF-8 | =c/pgdba         +
           |       |          |             |             | pgdba=CTc/pgdba
 template1 | pgdba | UTF8     | en_US.UTF-8 | en_US.UTF-8 | =c/pgdba         +
           |       |          |             |             | pgdba=CTc/pgdba
(4 rows)

pgdba <at> template1[[local]:5952] # \c dafodb
You are now connected to database "dafodb" as user "pgdba".
pgdba <at> dafodb[[local]:5952] # \d
                                 List of relations
 Schema |                         Name                          |   Type   | Owner
--------+-------------------------------------------------------+----------+-------
 public | XXXXXXX                                               | table    | pgdba
 public | YYYYYYYYYYYYY                                         | sequence | pgdba
....
....
....

So all the data is still there.

Connected processes at the moment:

pgdba <at> dafodb[[local]:5952] # select * from pg_stat_activity ;
 datid  | datname | pid  | usesysid | usename | application_name | client_addr | client_hostname | client_port |         backend_start         |          xact_start           |          query_start          |         state_change          | waiting | state  | backend_xid
--------+---------+------+----------+---------+------------------+-------------+-----------------+-------------+-------------------------------+-------------------------------+-------------------------------+-------------------------------+---------+--------+------------
 418048 | dafodb  | 4622 |       10 | pgdba   | psql             | (null)      | (null)          |          -1 | 2016-05-31 15:33:16.371456+02 | 2016-05-31 15:34:27.080439+02 | 2016-05-31 15:34:27.080439+02 | 2016-05-31 15:34:27.080442+02 | f       | active |      (null)

(1 row)

Time: 1.072 ms
pgdba <at> dafodb[[local]:5952] # select * from pg_locks ;
  locktype  | database | relation |  page  | tuple  | virtualxid | transactionid | classid | objid  | objsubid | virtualtransaction | pid  |      mode       | granted | fastpath
------------+----------+----------+--------+--------+------------+---------------+---------+--------+----------+--------------------+------+-----------------+---------+----------
 relation   |   418048 |    11673 | (null) | (null) | (null)     |        (null) |  (null) | (null) |   (null) | 3/165              | 4622 | AccessShareLock | t       | t
 virtualxid |   (null) |   (null) | (null) | (null) | 3/165      |        (null) |  (null) | (null) |   (null) | 3/165              | 4622 | ExclusiveLock   | t       | t
(2 rows)


Even weirder: I created a new DB with a completely unrelated name. Again, "create database" took a long time, and when I then connected to it, it had all the data from "dafodb".

Also tried: renaming dafodb to dafodb_OLD and then running "create database dafodb" again. Both contain the same data.

All this is on Pg 9.5.2 on 64-bit Ubuntu with kernel 3.13.0-74-generic.

Any idea what's going on or how to recover?
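
One test I suppose I could run, in case the data has somehow ended up inside template1 itself (CREATE DATABASE clones template1 by default, which would also explain the slow creation). An untested sketch:

CREATE DATABASE dafodb_empty TEMPLATE template0;
\c dafodb_empty
\d
-- if this comes up empty, the stray objects live in template1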


BR,
Thalis K.
Alexander Farber | 31 May 11:32 2016

How to hide JDBC connection credentials from git?

Hello,

I work on several small Java projects (using Maven+NetBeans) and store them in a public git repository.

I would like to use PostgreSQL JDBC in some of the projects, but don't want to make the connection credentials of my database public.

Surely there are other developers out there who have already faced a similar problem.

Please share your solutions.

Is it perhaps possible to put the database name/user/password into a *.properties file, ignore that file in the git repository, but keep it in the working copy?
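
To make the question concrete, here is the sketch I have in mind (the file names are just my guesses):

# .gitignore (committed): keep the real credentials file out of git
db.properties

# db.properties.example (committed): template with placeholders; each
# developer copies it to db.properties and fills in the real values
jdbc.url=jdbc:postgresql://localhost:5432/mydb
jdbc.user=CHANGEME
jdbc.password=CHANGEME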

Thank you
Alex

CN | 31 May 09:45 2016

Switching roles as a replacement for connection pooling tools

I have a feeling that a slight enhancement to the commands "SET ROLE" or
"SET SESSION AUTHORIZATION" could obsolete and outperform external
connection pooling tools in some use cases.

Assume we are in the following situation:

- There are a million schemas each owned by a distinct role.
- No role is allowed to access any schema other than its own.

If the command "SET SESSION AUTHORIZATION" were enhanced to accept an
additional clause

PASSWORD <password>

, then a client could establish just one connection to the server and
do the jobs for a million roles.

Say I want to gain full access to "schema2"; I would simply issue these
two commands

SET SESSION AUTHORIZATION user2 PASSWORD p2;
SET SEARCH_PATH TO schema2,pg_catalog;

, where "p2" is the password associated with role "user2".

If the current role is superuser "postgres" and it wants to downgrade
itself to role "user3", then it simply sends these commands:

SET SESSION AUTHORIZATION user3;
SET SEARCH_PATH TO schema3,pg_catalog;
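
One caveat I should note about this downgrade variant: the client can simply switch back, so it only helps when the pooler rather than the end user controls the SQL stream. A sketch:

SET SESSION AUTHORIZATION user3;  -- today this requires a superuser
SET SEARCH_PATH TO schema3,pg_catalog;
-- ... run user3's work ...
RESET SESSION AUTHORIZATION;      -- any client can issue this too, so
                                  -- it is not a security boundary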

Do my points make sense?
Would this be eligible as a feature request?

Best Regards,
CN

-- 
http://www.fastmail.com - Accessible with your email software
                          or over the web


Alex Ignatov | 30 May 18:57 2016

Silent data loss in its pure form

Following this bug report from Red Hat: https://bugzilla.redhat.com/show_bug.cgi?id=845233

It raises a dangerous issue:

If for any reason your data file is zeroed after a power loss (historically the best-known issue on XFS), then when you do
select count(*) from your_table you get zero if the table fit into one 1GB (default) segment file, or otherwise some number != the count(*) from before the power loss.
No errors, nothing suspicious in the logs, no checksum errors. Nothing.

Silent data loss in its pure form.

And thank all the gods if you notice it before backup recycling discards the copies that still contain good data.
Keep this in mind while checking your "backups" in any form (pg_dump, or the more dangerous and terse PITR file backup).

Your data is always in danger under the "a zeroed data file is a normal file" paradigm.


--
Alex Ignatov
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company
Daniel Westermann | 30 May 17:35 2016

Deleting a table file does not raise an error when the table is touched afterwards, why?

Hi,

I need to understand something. Let's assume I have a table t5 with 1'000'000 rows:

(postgres <at> [local]:5432) [sample] > select count(*) from t5;
  count  
---------
 1000000
(1 row)

Time: 2363.834 ms
(postgres <at> [local]:5432) [sample] >

I get the file for that table:

postgres <at> pg_essentials_p1:/u02/pgdata/PG1/base/16422/ [PG1] oid2name -d sample -t t5
From database "sample":
  Filenode  Table Name
----------------------
     32809          t5


Then I delete the file:

postgres <at> pg_essentials_p1:/u02/pgdata/PG1/base/16422/ [PG1] rm 32809

When doing the count(*) on the table again:

(postgres <at> [local]:5432) [sample] > select count(*) from t5;
  count  
---------
 1000000
(1 row)

There is nothing in the log. This is probably coming from the cache, isn't it? Is this intended and safe?
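
One way to check the cache theory, I suppose, assuming lsof is available: the kernel keeps an unlinked file alive as long as some process still holds it open, so the deleted file should still be visible:

# open files with link count 0 show up with +L1; the backends and
# the checkpointer may still hold the unlinked 32809 open
lsof +L1 | grep 32809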

Then I restart the instance and do the select again:

2016-05-30 19:25:20.633 CEST - 9 - 2777 -  - <at> FATAL:  could not open file "base/16422/32809": No such file or directory
2016-05-30 19:25:20.633 CEST - 10 - 2777 -  - <at> CONTEXT:  writing block 8192 of relation base/16422/32809

(postgres <at> [local]:5432) [sample] > select count(*) from t5;
 count  
--------
 437920
(1 row)

Can someone please tell me the intention behind this behavior? From my point of view it is dangerous: if nobody is monitoring the log (which sadly is often the case in reality), nobody will notice that only part of the table is there. Wouldn't it be much safer to raise an error as soon as the table is touched?

PostgreSQL version:

(postgres <at> [local]:5432) [sample] > select version();
-[ RECORD 1 ]----------------------------------------------------------------------------------------------------------------------------
version | PostgreSQL 9.4.4 on x86_64-unknown-linux-gnu, compiled by gcc (GCC) 4.8.5 20150623 (Red Hat 4.8.5-4), 64-bit

Thanks in advance
Daniel

Brian Sutherland | 30 May 15:37 2016

After replication failover: could not read block X in file Y read only 0 of 8192 bytes

I'm running a streaming replication setup with PostgreSQL 9.5.2 and have
started seeing these errors on a few INSERTs:

    ERROR:  could not read block 8 in file "base/3884037/3885279": read only 0 of 8192 bytes

on a few tables. If I look at that specific file, it's only 6 blocks
long:

    # ls -la base/3884037/3885279
    -rw------- 1 postgres postgres 49152 May 30 12:56 base/3884037/3885279

This seems to be the case for most of the affected tables. I haven't
seen any errors on SELECT, and I can SELECT * on all the tables I know
to have this problem. The database machine is under reasonable load.

On some tables an "ANALYZE tablename" causes the error.
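
For anyone else debugging this, a query sketch that should map the filenode from the error back to a relation (run inside the affected database; 3885279 is taken from the error above):

    SELECT relname, relpages,
           pg_relation_size(oid) / 8192 AS blocks_on_disk
    FROM pg_class
    WHERE pg_relation_filenode(oid) = 3885279;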

We recently had a streaming replication failover after loading a large
amount of data with pg_restore. The problems seem to have started after
that, but I'm not perfectly sure.

I have data_checksums switched on, so I suspect a streaming
replication bug. Does anyone know of a recent bug which could have
caused this?

-- 
Brian Sutherland


CN | 30 May 11:15 2016

How to find business partners from PostgreSQL communities?

I have a business plan for my product and services both developed on top
of PostgreSQL. I am looking for partners to form a start-up to work on
these product and services.

My ideal candidates are PostgreSQL endorsers. In addition, I hope to
expose as few of the technical details of my plan as possible, during
the early stages of discussion, to potential competitors, and to
non-PostgreSQL endorsers in particular.

Like most predecessors, I feel my idea is unprecedented. However, I also
understand that these days many people, including myself in most cases,
tend to read terms like "business plan", "idea", "start-up", "cloud",
"big data", etc. as "propaganda", "spam", or worse, "scam". This is why
I not only list below the channels to which I might send my
solicitations, but also attach the obvious concerns to each:

- PostgreSQL mailing lists
  concerns:
    (a) Will my messages be deemed spam or harassment?
    (b) Which mailing list, if any, is appropriate for my messages?
    "-jobs" is definitely inappropriate because my pockets are empty
    and I am unable to hire anyone.

- manually compiling the names and associated e-mail addresses of
core developers, hackers, contributors, users, etc.
  concerns:
    (a) This is a very ineffective approach.
    (b) I recall the horrible incident years ago when someone harvested
    the e-mail addresses in the mailing list archives and used them in
    a way I no longer remember. That activity caused huge anger among
    many community members.

- LinkedIn private message services
  concern: First I would have to invite many people I do not really
  know to connect with me, and then I would ultimately end up on their
  block lists.

- LinkedIn interest groups
  concern: I have a feeling that many articles and discussions posted in
  the groups I joined are in essence propaganda coated with technology.

- Twitter
  concerns:
    (a) People do not like advertisements. Such messages will most
    likely be ignored.
    (b) I have few followers.

- Requesting a new PostgreSQL mailing list, which might be called
"-biz-opportunities" or "-biz-partners"
  concern: My messages would be delivered to only a few recipients,
  because a new list would have very few subscribers.

Above all, the last thing I want to get from my inquiries is *silence*.

EnterpriseDB and xTuple are two successful examples I have heard of.
Hopefully I will be able to follow their patterns of success. I wonder
how they achieved it, for example:

- How did those founders find and attract, in the first place, the
individuals who were willing to discuss their great plans?
- How did they discuss their plans without fearing that too many
sensitive details would be exposed before their companies were formed?

I need your enlightenment! Which routes are the least offensive, yet the
most effective, efficient, and appropriate ones for me to take to get my
awesome (or just another awful) "idea" delivered to those targeted
PostgreSQL endorsers?

Thank you in advance!

Best Regards,
CN

-- 
http://www.fastmail.com - Or how I learned to stop worrying and
                          love email again


Sridhar N Bamandlapally | 30 May 09:25 2016

UUID datatype

Hi

Is there a way to implicitly SELECT a UUID datatype in uppercase?
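
For reference, the explicit workaround I know of, which I would like to avoid writing everywhere (as far as I know, uuid has no uppercase output mode, so the usual route is a cast to text):

SELECT upper('a0eebc99-9c0b-4ef8-bb6d-6bb9bd380a11'::uuid::text);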

Please

Thanks
Sridhar

Attila Soki | 29 May 22:10 2016

plugin dev, oid to pointer map

Hi all,

I am about to begin PostgreSQL plugin development.
Currently I'm trying to become somewhat familiar with the PostgreSQL sources.

Without going too deep into the details of the plugin, I want to use
many Oid-to-pointer relations.
Each pointer points to my own struct (allocated with palloc).
There will be approx. 1000 unique Oid/pointer pairs.

Basically, what I want is to be able to look up the pointer to my struct by Oid.

Is there a suitable hashmap or key-value storage solution in the pg code?
If so, please point me to the right part of the source.
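
In case this is the right direction, a hedged sketch using the backend's dynahash facility (src/backend/utils/hash/dynahash.c, with the API in src/include/utils/hsearch.h), which backend code uses for exactly this kind of map; all names and the struct below are hypothetical:

#include "postgres.h"
#include "utils/hsearch.h"
#include "utils/memutils.h"

/* dynahash entry: the Oid key must be the first field */
typedef struct OidPtrEntry
{
    Oid     key;
    void   *ptr;            /* pointer to the palloc'd struct */
} OidPtrEntry;

static HTAB *oid_ptr_map = NULL;

static void
oid_ptr_map_init(void)
{
    HASHCTL ctl;

    MemSet(&ctl, 0, sizeof(ctl));
    ctl.keysize = sizeof(Oid);
    ctl.entrysize = sizeof(OidPtrEntry);
    ctl.hcxt = TopMemoryContext;    /* or any longer-lived context */

    /* HASH_BLOBS (9.5+): the key is binary data, not a C string */
    oid_ptr_map = hash_create("oid to ptr map", 1024, &ctl,
                              HASH_ELEM | HASH_BLOBS | HASH_CONTEXT);
}

static void
oid_ptr_map_put(Oid oid, void *value)
{
    bool        found;
    OidPtrEntry *entry;

    entry = (OidPtrEntry *) hash_search(oid_ptr_map, &oid,
                                        HASH_ENTER, &found);
    entry->ptr = value;
}

static void *
oid_ptr_map_get(Oid oid)
{
    OidPtrEntry *entry;

    entry = (OidPtrEntry *) hash_search(oid_ptr_map, &oid,
                                        HASH_FIND, NULL);
    return entry ? entry->ptr : NULL;
}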

thanks,
Attila Soki


Johannes | 29 May 06:36 2016

recordings of pgconf us 2016

I think I have seen all the video recordings from PGConf US 2015 on YouTube.
Are there any recordings from this year available?

Best regards Johannes

