Josh Berkus | 9 Feb 12:17 2016

Re: no pg_hba.conf entry for replication connection

On 02/09/2016 11:44 AM, Алексей Митропольский wrote:
> Hello,
>
> I have tried to set up streaming replication (Postgres 9.4.5.3 Linux).
>
> pg_hba.conf on the master server has this entry:
>
> host replication postgres 192.168.50.36 trust

I don't recommend using "trust" for this.  Use a .pgpass file instead to 
save the password.
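
A sketch of that setup (the replication user, hostname, and password below are
hypothetical; only the standby address comes from the post):

```
# pg_hba.conf on the master -- md5 auth instead of trust; note the /32 mask:
host  replication  replicator  192.168.50.36/32  md5

# ~/.pgpass on the standby (must be mode 0600), one line per connection,
# in hostname:port:database:username:password format:
master.example.com:5432:replication:replicator:secret
```

libpq reads ~/.pgpass automatically, so the standby can authenticate without
a password appearing in primary_conninfo.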

>
> When I start the standby server (192.168.50.36) I get errors in the logs:
>
> LOG: entering standby mode
> LOG: restored log file "0000...10" from archive
> LOG: redo starts at 0/10000788
> LOG: consistent recovery state reached at 0/11000000
> LOG: database system is ready to accept read only connections
> LOG: scp: /opt/PostgreSQL/9.4/data/arch_dest/0000..11: No such file or
> directory
> LOG: invalid magic number 0000 in log segment 0000..11, offset 0
> FATAL: could not connect to the primary server: FATAL: no pg_hba.conf
> entry for replication connection from host "192.168.50.36",
> user "postgres", SSL off

Did you reload the master?
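
pg_hba.conf changes only take effect after a reload. For example (assuming a
superuser session; the server just needs to re-read its config files):

```sql
-- signals the postmaster to re-read pg_hba.conf and postgresql.conf;
-- returns true when the signal was sent
SELECT pg_reload_conf();
```

Equivalently, `pg_ctl reload -D <datadir>` from the shell does the same thing.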



no pg_hba.conf entry for replication connection

Hello,

I have tried to set up streaming replication (Postgres 9.4.5.3 Linux).

pg_hba.conf on the master server has this entry:

host replication postgres 192.168.50.36 trust

When I start the standby server (192.168.50.36) I get errors in the logs:

LOG: entering standby mode
LOG: restored log file "0000...10" from archive
LOG: redo starts at 0/10000788
LOG: consistent recovery state reached at 0/11000000
LOG: database system is ready to accept read only connections
LOG: scp: /opt/PostgreSQL/9.4/data/arch_dest/0000..11: No such file or directory
LOG: invalid magic number 0000 in log segment 0000..11, offset 0
FATAL: could not connect to the primary server: FATAL: no pg_hba.conf
entry for replication connection from host "192.168.50.36",
user "postgres", SSL off

Sincerely yours,
Alexey Mitropolsky




Johannes | 8 Feb 20:05 2016

execute same query only one time?

Hi,

Is there a best practice to share data between two SELECT statements?

Imagine the following situation: I want to receive two result sets from two
tables, referring to a specific id from table t0, and I want to avoid querying
for that specific id a second time.

Table t0 returns 1 row and table t1 returns multiple rows.

begin;
select id, col1, col2, ... from t0 where id = (select max(id) from t0
where col1 = value1 and col2 = value2 and ...);
select col1 from t1 where t0_id = (select max(id) from t0 where col1 =
value1 and col2 = value2 and ...);
commit;
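
One common approach (a sketch reusing the table and column names from the
example above; value1/value2 remain placeholders) is to compute the id once in
a CTE. Since a CTE's scope is a single statement, the two selects are combined
into one:

```sql
-- compute the matching id once, then join both tables against it
WITH target AS (
    SELECT max(id) AS id
    FROM t0
    WHERE col1 = 'value1' AND col2 = 'value2'
)
SELECT t0.id, t0.col1, t0.col2, t1.col1 AS t1_col1
FROM target
JOIN t0 USING (id)
LEFT JOIN t1 ON t1.t0_id = target.id;
```

The t0 columns are repeated on each of t1's rows; if two genuinely separate
result sets are required, a temporary table or caching the id on the
application side are the usual alternatives.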

Best regards Johannes

Geoff Winkless | 8 Feb 15:39 2016

COALESCE requires NULL from scalar subquery has a type

Hi

Not an important question, but a niggle.

CREATE TABLE gwtest (id INT PRIMARY KEY); INSERT INTO gwtest VALUES (1),(2),(3);
SELECT COALESCE((SELECT 'Yes' FROM gwtest WHERE id=4), 'No') AS valid;

gives an error

failed to find conversion function from unknown to text

I can work around this with

SELECT COALESCE((SELECT 'Yes' FROM gwtest WHERE id=4)::varchar, 'No') AS valid;

I'm guessing this is because Postgres can't deduce the type of the
string column from the source when the result isn't returned. Oddly,
it also seems to cope when I do:

SELECT COALESCE((SELECT 'Yes'::varchar FROM gwtest WHERE id=4), 'No') AS valid

which _really_ blows my mind.

I'm aware I could use EXISTS instead (and it would be much nicer) and
(as above) I can fix it with an explicit cast, but it seems odd that
a) COALESCE can infer the type from the string when one is returned
and from an explicitly cast string that _isn't_ returned, but can't
infer the type from the non-cast version, and b) it needs a type for
NULL at all (since any NULL is going to be treated the same).
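
For reference, the EXISTS alternative mentioned above could look like this;
with CASE, both branches are output literals of the same expression, so their
type resolves without an explicit cast:

```sql
SELECT CASE WHEN EXISTS (SELECT 1 FROM gwtest WHERE id = 4)
            THEN 'Yes' ELSE 'No'
       END AS valid;
```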

(running 9.5, if it matters)

Am I missing something?

Geoff

--

-- 
Sent via pgsql-general mailing list (pgsql-general <at> postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general

Joshua D. Drake | 8 Feb 18:46 2016

Reminder: PgDay @ LFNW CFP is closing on the 10th

Hello,

As a reminder, the PgDay @ LFNW CFP is closing on the 10th. This is a 
great 2,000-person non-profit conference that takes place at a local 
college in Bellingham, WA.

We are still looking for talks! Bring your ideas.

Sincerely,

JD

-- 
Command Prompt, Inc.                  http://the.postgres.company/
                         +1-503-667-4564
PostgreSQL Centered full stack support, consulting and development.
Everyone appreciates your honesty, until you are honest with them.


Thom Brown | 8 Feb 15:43 2016

Very slow DELETEs with foreign keys

Hi,

I've just noticed a general delete performance issue while testing a
patch, and this can be recreated on all recent major versions.

I have 2 tables:

CREATE TABLE countries (
    country text PRIMARY KEY,
    continent text
);

CREATE TABLE contacts (
    id serial PRIMARY KEY,
    first_name text,
    last_name text,
    age integer,
    country text REFERENCES countries (country)
);

Here's a sample of the data in the tables:

# SELECT * FROM contacts LIMIT 10;
   id    | first_name | last_name  | age |             country
---------+------------+------------+-----+----------------------------------
 4873919 | Sharon     | Blackburn  |  45 | Indonesia
 4873920 | Ila        | Merrill    |   3 | Zambia
 4873921 | Brian      | Rogers     |  85 | Bahamas
 4873922 | Michelle   | Cunningham |  33 | Malta
 4873923 | Garrett    | Thompson   |  17 | France
 4873924 | Jemima     | Holloway   |  57 | Bahamas
 4873925 | Hector     | Walls      |  82 | Kenya
 4873926 | Evangeline | Copeland   |  57 | Isle of Man
 4873927 | Montana    | Cline      |   9 | Saint Vincent and The Grenadines
 4873928 | Reece      | Albert     |  66 | Virgin Islands, United States
(10 rows)

# SELECT * FROM countries LIMIT 10;
  country   |   continent
------------+---------------
 Albania    | Europe
 Algeria    | Africa
 Andorra    | Europe
 Angola     | Africa
 Anguilla   | Caribbean
 Antarctica | Antarctica
 Argentina  | South America
 Armenia    | Europe
 Aruba      | Caribbean
 Australia  | Australasia
(10 rows)

"contacts" contains 5 million rows
"countries" contains 498 rows

I then ran:

INSERT INTO countries
SELECT country || '1', continent || '2' FROM countries;

to duplicate all the rows, but with a number appended to the values.

But ran into a problem with:

# DELETE FROM countries WHERE continent LIKE '%2';
^CCancel request sent
ERROR:  canceling statement due to user request
CONTEXT:  SQL statement "SELECT 1 FROM ONLY "public"."contacts" x
WHERE $1 OPERATOR(pg_catalog.=) "country" FOR KEY SHARE OF x"
Time: 278560.623 ms

As you can see, I ran out of patience, but before I did, I ran a stack trace:

#0  ExecScan (node=node <at> entry=0x1a97f40,
accessMtd=accessMtd <at> entry=0x661063 <SeqNext>,
recheckMtd=recheckMtd <at> entry=0x661050 <SeqRecheck>) at execScan.c:236
#1  0x000000000066118b in ExecSeqScan (node=node <at> entry=0x1a97f40) at
nodeSeqscan.c:127
#2  0x000000000063ba4a in ExecProcNode (node=node <at> entry=0x1a97f40) at
execProcnode.c:419
#3  0x0000000000658860 in ExecLockRows (node=node <at> entry=0x1a97d50) at
nodeLockRows.c:57
#4  0x000000000063bd67 in ExecProcNode (node=node <at> entry=0x1a97d50) at
execProcnode.c:527
#5  0x0000000000636363 in ExecutePlan (estate=estate <at> entry=0x1a97b70,
planstate=0x1a97d50, use_parallel_mode=0 '\000',
operation=operation <at> entry=CMD_SELECT,
    sendTuples=sendTuples <at> entry=1 '\001',
numberTuples=numberTuples <at> entry=1,
direction=direction <at> entry=ForwardScanDirection,
dest=dest <at> entry=0xde70c0 <spi_printtupDR>)
    at execMain.c:1566
#6  0x00000000006372ac in standard_ExecutorRun (queryDesc=0x1aadbf0,
direction=ForwardScanDirection, count=1) at execMain.c:338
#7  0x0000000000637350 in ExecutorRun
(queryDesc=queryDesc <at> entry=0x1aadbf0,
direction=direction <at> entry=ForwardScanDirection, count=count <at> entry=1)
at execMain.c:286
#8  0x000000000066cdcf in _SPI_pquery
(queryDesc=queryDesc <at> entry=0x1aadbf0,
fire_triggers=fire_triggers <at> entry=0 '\000', tcount=1) at spi.c:2404
#9  0x000000000066f7a3 in _SPI_execute_plan
(plan=plan <at> entry=0x1aad790, paramLI=0x1aadba0,
snapshot=snapshot <at> entry=0x0,
    crosscheck_snapshot=crosscheck_snapshot <at> entry=0x0,
read_only=read_only <at> entry=0 '\000',
fire_triggers=fire_triggers <at> entry=0 '\000', tcount=tcount <at> entry=1)
    at spi.c:2192
#10 0x000000000066fcc8 in SPI_execute_snapshot
(plan=plan <at> entry=0x1aad790, Values=Values <at> entry=0x7ffff20c8e80,
Nulls=Nulls <at> entry=0x7ffff20c8e40 " ",
    snapshot=snapshot <at> entry=0x0,
crosscheck_snapshot=crosscheck_snapshot <at> entry=0x0,
read_only=read_only <at> entry=0 '\000',
fire_triggers=fire_triggers <at> entry=0 '\000',
    tcount=tcount <at> entry=1) at spi.c:489
#11 0x0000000000873f1b in ri_PerformCheck
(riinfo=riinfo <at> entry=0x1a7cb70, qkey=qkey <at> entry=0x7ffff20c9360,
qplan=0x1aad790, fk_rel=fk_rel <at> entry=0x7fcdcde65480,
    pk_rel=pk_rel <at> entry=0x7fcdcde5ee88,
old_tuple=old_tuple <at> entry=0x7ffff20c9940,
new_tuple=new_tuple <at> entry=0x0, detectNewRows=detectNewRows <at> entry=1
'\001',
    expect_OK=expect_OK <at> entry=5) at ri_triggers.c:3142
#12 0x00000000008746e7 in ri_restrict_del (trigdata=<optimized out>,
is_no_action=is_no_action <at> entry=1 '\001') at ri_triggers.c:773
#13 0x0000000000875b07 in RI_FKey_noaction_del (fcinfo=0x7ffff20c9510)
at ri_triggers.c:613
#14 0x00000000006118f0 in ExecCallTriggerFunc
(trigdata=trigdata <at> entry=0x7ffff20c9960, tgindx=tgindx <at> entry=0,
finfo=finfo <at> entry=0x1a01630, instr=instr <at> entry=0x0,
    per_tuple_context=per_tuple_context <at> entry=0x1aa5c50) at trigger.c:1910
#15 0x0000000000612fae in AfterTriggerExecute
(event=event <at> entry=0x1a7cea0, rel=rel <at> entry=0x7fcdcde5ee88,
trigdesc=trigdesc <at> entry=0x1a01490,
    finfo=finfo <at> entry=0x1a01630, instr=instr <at> entry=0x0,
per_tuple_context=per_tuple_context <at> entry=0x1aa5c50,
trig_tuple_slot1=trig_tuple_slot1 <at> entry=0x0,
    trig_tuple_slot2=trig_tuple_slot2 <at> entry=0x0) at trigger.c:3643
#16 0x00000000006132b3 in afterTriggerInvokeEvents
(events=events <at> entry=0x1a8d8a0, firing_id=1,
estate=estate <at> entry=0x1a01240, delete_ok=delete_ok <at> entry=1 '\001')
    at trigger.c:3836
#17 0x0000000000618134 in AfterTriggerEndQuery
(estate=estate <at> entry=0x1a01240) at trigger.c:4008
#18 0x00000000006373d2 in standard_ExecutorFinish
(queryDesc=0x1ab33d0) at execMain.c:411
#19 0x0000000000637449 in ExecutorFinish
(queryDesc=queryDesc <at> entry=0x1ab33d0) at execMain.c:379
#20 0x00000000007af5c6 in ProcessQuery (plan=plan <at> entry=0x1aaa3b8,
sourceText=0x1a4dda0 "delete from countries where continent like
'%2';", params=0x0,
    dest=dest <at> entry=0x1aaa498,
completionTag=completionTag <at> entry=0x7ffff20c9d20 "DELETE 249") at
pquery.c:225
#21 0x00000000007af73d in PortalRunMulti
(portal=portal <at> entry=0x1aa7720, isTopLevel=isTopLevel <at> entry=1 '\001',
dest=dest <at> entry=0x1aaa498,
    altdest=altdest <at> entry=0x1aaa498,
completionTag=completionTag <at> entry=0x7ffff20c9d20 "DELETE 249") at
pquery.c:1275
#22 0x00000000007b077e in PortalRun (portal=portal <at> entry=0x1aa7720,
count=count <at> entry=9223372036854775807, isTopLevel=isTopLevel <at> entry=1
'\001',
    dest=dest <at> entry=0x1aaa498, altdest=altdest <at> entry=0x1aaa498,
completionTag=completionTag <at> entry=0x7ffff20c9d20 "DELETE 249") at
pquery.c:812
#23 0x00000000007ad433 in exec_simple_query
(query_string=query_string <at> entry=0x1a4dda0 "delete from countries
where continent like '%2';") at postgres.c:1094
#24 0x00000000007ae029 in PostgresMain (argc=<optimized out>,
argv=argv <at> entry=0x19e7b98, dbname=0x19e7a48 "postgres",
username=<optimized out>) at postgres.c:4021
#25 0x0000000000730091 in BackendRun (port=port <at> entry=0x1a08150) at
postmaster.c:4258
#26 0x000000000073272c in BackendStartup (port=port <at> entry=0x1a08150)
at postmaster.c:3932
#27 0x0000000000732af2 in ServerLoop () at postmaster.c:1690
#28 0x00000000007345c1 in PostmasterMain (argc=argc <at> entry=3,
argv=argv <at> entry=0x19e6c70) at postmaster.c:1298
#29 0x0000000000689943 in main (argc=3, argv=0x19e6c70) at main.c:223

Then another:

#0  ExecQual (qual=qual <at> entry=0x1a98b60,
econtext=econtext <at> entry=0x1a98050, resultForNull=resultForNull <at> entry=0
'\000') at execQual.c:5263
#1  0x0000000000645821 in ExecScan (node=node <at> entry=0x1a97f40,
accessMtd=accessMtd <at> entry=0x661063 <SeqNext>,
recheckMtd=recheckMtd <at> entry=0x661050 <SeqRecheck>)
    at execScan.c:208
#2  0x000000000066118b in ExecSeqScan (node=node <at> entry=0x1a97f40) at
nodeSeqscan.c:127
#3  0x000000000063ba4a in ExecProcNode (node=node <at> entry=0x1a97f40) at
execProcnode.c:419
#4  0x0000000000658860 in ExecLockRows (node=node <at> entry=0x1a97d50) at
nodeLockRows.c:57
#5  0x000000000063bd67 in ExecProcNode (node=node <at> entry=0x1a97d50) at
execProcnode.c:527
#6  0x0000000000636363 in ExecutePlan (estate=estate <at> entry=0x1a97b70,
planstate=0x1a97d50, use_parallel_mode=0 '\000',
operation=operation <at> entry=CMD_SELECT,
    sendTuples=sendTuples <at> entry=1 '\001',
numberTuples=numberTuples <at> entry=1,
direction=direction <at> entry=ForwardScanDirection,
dest=dest <at> entry=0xde70c0 <spi_printtupDR>)
    at execMain.c:1566
#7  0x00000000006372ac in standard_ExecutorRun (queryDesc=0x1aadbf0,
direction=ForwardScanDirection, count=1) at execMain.c:338
#8  0x0000000000637350 in ExecutorRun
(queryDesc=queryDesc <at> entry=0x1aadbf0,
direction=direction <at> entry=ForwardScanDirection, count=count <at> entry=1)
at execMain.c:286
#9  0x000000000066cdcf in _SPI_pquery
(queryDesc=queryDesc <at> entry=0x1aadbf0,
fire_triggers=fire_triggers <at> entry=0 '\000', tcount=1) at spi.c:2404
#10 0x000000000066f7a3 in _SPI_execute_plan
(plan=plan <at> entry=0x1aad790, paramLI=0x1aadba0,
snapshot=snapshot <at> entry=0x0,
    crosscheck_snapshot=crosscheck_snapshot <at> entry=0x0,
read_only=read_only <at> entry=0 '\000',
fire_triggers=fire_triggers <at> entry=0 '\000', tcount=tcount <at> entry=1)
    at spi.c:2192
#11 0x000000000066fcc8 in SPI_execute_snapshot
(plan=plan <at> entry=0x1aad790, Values=Values <at> entry=0x7ffff20c8e80,
Nulls=Nulls <at> entry=0x7ffff20c8e40 " ",
    snapshot=snapshot <at> entry=0x0,
crosscheck_snapshot=crosscheck_snapshot <at> entry=0x0,
read_only=read_only <at> entry=0 '\000',
fire_triggers=fire_triggers <at> entry=0 '\000',
    tcount=tcount <at> entry=1) at spi.c:489
#12 0x0000000000873f1b in ri_PerformCheck
(riinfo=riinfo <at> entry=0x1a7cb70, qkey=qkey <at> entry=0x7ffff20c9360,
qplan=0x1aad790, fk_rel=fk_rel <at> entry=0x7fcdcde65480,
    pk_rel=pk_rel <at> entry=0x7fcdcde5ee88,
old_tuple=old_tuple <at> entry=0x7ffff20c9940,
new_tuple=new_tuple <at> entry=0x0, detectNewRows=detectNewRows <at> entry=1
'\001',
    expect_OK=expect_OK <at> entry=5) at ri_triggers.c:3142
#13 0x00000000008746e7 in ri_restrict_del (trigdata=<optimized out>,
is_no_action=is_no_action <at> entry=1 '\001') at ri_triggers.c:773
#14 0x0000000000875b07 in RI_FKey_noaction_del (fcinfo=0x7ffff20c9510)
at ri_triggers.c:613
#15 0x00000000006118f0 in ExecCallTriggerFunc
(trigdata=trigdata <at> entry=0x7ffff20c9960, tgindx=tgindx <at> entry=0,
finfo=finfo <at> entry=0x1a01630, instr=instr <at> entry=0x0,
    per_tuple_context=per_tuple_context <at> entry=0x1aa5c50) at trigger.c:1910
#16 0x0000000000612fae in AfterTriggerExecute
(event=event <at> entry=0x1a7ceb8, rel=rel <at> entry=0x7fcdcde5ee88,
trigdesc=trigdesc <at> entry=0x1a01490,
    finfo=finfo <at> entry=0x1a01630, instr=instr <at> entry=0x0,
per_tuple_context=per_tuple_context <at> entry=0x1aa5c50,
trig_tuple_slot1=trig_tuple_slot1 <at> entry=0x0,
    trig_tuple_slot2=trig_tuple_slot2 <at> entry=0x0) at trigger.c:3643
#17 0x00000000006132b3 in afterTriggerInvokeEvents
(events=events <at> entry=0x1a8d8a0, firing_id=1,
estate=estate <at> entry=0x1a01240, delete_ok=delete_ok <at> entry=1 '\001')
    at trigger.c:3836
#18 0x0000000000618134 in AfterTriggerEndQuery
(estate=estate <at> entry=0x1a01240) at trigger.c:4008
#19 0x00000000006373d2 in standard_ExecutorFinish
(queryDesc=0x1ab33d0) at execMain.c:411
#20 0x0000000000637449 in ExecutorFinish
(queryDesc=queryDesc <at> entry=0x1ab33d0) at execMain.c:379
#21 0x00000000007af5c6 in ProcessQuery (plan=plan <at> entry=0x1aaa3b8,
sourceText=0x1a4dda0 "delete from countries where continent like
'%2';", params=0x0,
    dest=dest <at> entry=0x1aaa498,
completionTag=completionTag <at> entry=0x7ffff20c9d20 "DELETE 249") at
pquery.c:225
#22 0x00000000007af73d in PortalRunMulti
(portal=portal <at> entry=0x1aa7720, isTopLevel=isTopLevel <at> entry=1 '\001',
dest=dest <at> entry=0x1aaa498,
    altdest=altdest <at> entry=0x1aaa498,
completionTag=completionTag <at> entry=0x7ffff20c9d20 "DELETE 249") at
pquery.c:1275
#23 0x00000000007b077e in PortalRun (portal=portal <at> entry=0x1aa7720,
count=count <at> entry=9223372036854775807, isTopLevel=isTopLevel <at> entry=1
'\001',
    dest=dest <at> entry=0x1aaa498, altdest=altdest <at> entry=0x1aaa498,
completionTag=completionTag <at> entry=0x7ffff20c9d20 "DELETE 249") at
pquery.c:812
#24 0x00000000007ad433 in exec_simple_query
(query_string=query_string <at> entry=0x1a4dda0 "delete from countries
where continent like '%2';") at postgres.c:1094
#25 0x00000000007ae029 in PostgresMain (argc=<optimized out>,
argv=argv <at> entry=0x19e7b98, dbname=0x19e7a48 "postgres",
username=<optimized out>) at postgres.c:4021
#26 0x0000000000730091 in BackendRun (port=port <at> entry=0x1a08150) at
postmaster.c:4258
#27 0x000000000073272c in BackendStartup (port=port <at> entry=0x1a08150)
at postmaster.c:3932
#28 0x0000000000732af2 in ServerLoop () at postmaster.c:1690
#29 0x00000000007345c1 in PostmasterMain (argc=argc <at> entry=3,
argv=argv <at> entry=0x19e6c70) at postmaster.c:1298
#30 0x0000000000689943 in main (argc=3, argv=0x19e6c70) at main.c:223

Then another:

#0  SeqNext (node=node <at> entry=0x1a97f40) at nodeSeqscan.c:62
#1  0x00000000006457a6 in ExecScanFetch (recheckMtd=0x661050
<SeqRecheck>, accessMtd=0x661063 <SeqNext>, node=0x1a97f40) at
execScan.c:95
#2  ExecScan (node=node <at> entry=0x1a97f40,
accessMtd=accessMtd <at> entry=0x661063 <SeqNext>,
recheckMtd=recheckMtd <at> entry=0x661050 <SeqRecheck>) at execScan.c:180
#3  0x000000000066118b in ExecSeqScan (node=node <at> entry=0x1a97f40) at
nodeSeqscan.c:127
#4  0x000000000063ba4a in ExecProcNode (node=node <at> entry=0x1a97f40) at
execProcnode.c:419
#5  0x0000000000658860 in ExecLockRows (node=node <at> entry=0x1a97d50) at
nodeLockRows.c:57
#6  0x000000000063bd67 in ExecProcNode (node=node <at> entry=0x1a97d50) at
execProcnode.c:527
#7  0x0000000000636363 in ExecutePlan (estate=estate <at> entry=0x1a97b70,
planstate=0x1a97d50, use_parallel_mode=0 '\000',
operation=operation <at> entry=CMD_SELECT,
    sendTuples=sendTuples <at> entry=1 '\001',
numberTuples=numberTuples <at> entry=1,
direction=direction <at> entry=ForwardScanDirection,
dest=dest <at> entry=0xde70c0 <spi_printtupDR>)
    at execMain.c:1566
#8  0x00000000006372ac in standard_ExecutorRun (queryDesc=0x1aadbf0,
direction=ForwardScanDirection, count=1) at execMain.c:338
#9  0x0000000000637350 in ExecutorRun
(queryDesc=queryDesc <at> entry=0x1aadbf0,
direction=direction <at> entry=ForwardScanDirection, count=count <at> entry=1)
at execMain.c:286
#10 0x000000000066cdcf in _SPI_pquery
(queryDesc=queryDesc <at> entry=0x1aadbf0,
fire_triggers=fire_triggers <at> entry=0 '\000', tcount=1) at spi.c:2404
#11 0x000000000066f7a3 in _SPI_execute_plan
(plan=plan <at> entry=0x1aad790, paramLI=0x1aadba0,
snapshot=snapshot <at> entry=0x0,
    crosscheck_snapshot=crosscheck_snapshot <at> entry=0x0,
read_only=read_only <at> entry=0 '\000',
fire_triggers=fire_triggers <at> entry=0 '\000', tcount=tcount <at> entry=1)
    at spi.c:2192
#12 0x000000000066fcc8 in SPI_execute_snapshot
(plan=plan <at> entry=0x1aad790, Values=Values <at> entry=0x7ffff20c8e80,
Nulls=Nulls <at> entry=0x7ffff20c8e40 " ",
    snapshot=snapshot <at> entry=0x0,
crosscheck_snapshot=crosscheck_snapshot <at> entry=0x0,
read_only=read_only <at> entry=0 '\000',
fire_triggers=fire_triggers <at> entry=0 '\000',
    tcount=tcount <at> entry=1) at spi.c:489
#13 0x0000000000873f1b in ri_PerformCheck
(riinfo=riinfo <at> entry=0x1a7cb70, qkey=qkey <at> entry=0x7ffff20c9360,
qplan=0x1aad790, fk_rel=fk_rel <at> entry=0x7fcdcde65480,
    pk_rel=pk_rel <at> entry=0x7fcdcde5ee88,
old_tuple=old_tuple <at> entry=0x7ffff20c9940,
new_tuple=new_tuple <at> entry=0x0, detectNewRows=detectNewRows <at> entry=1
'\001',
    expect_OK=expect_OK <at> entry=5) at ri_triggers.c:3142
#14 0x00000000008746e7 in ri_restrict_del (trigdata=<optimized out>,
is_no_action=is_no_action <at> entry=1 '\001') at ri_triggers.c:773
#15 0x0000000000875b07 in RI_FKey_noaction_del (fcinfo=0x7ffff20c9510)
at ri_triggers.c:613
#16 0x00000000006118f0 in ExecCallTriggerFunc
(trigdata=trigdata <at> entry=0x7ffff20c9960, tgindx=tgindx <at> entry=0,
finfo=finfo <at> entry=0x1a01630, instr=instr <at> entry=0x0,
    per_tuple_context=per_tuple_context <at> entry=0x1aa5c50) at trigger.c:1910
#17 0x0000000000612fae in AfterTriggerExecute
(event=event <at> entry=0x1a7d050, rel=rel <at> entry=0x7fcdcde5ee88,
trigdesc=trigdesc <at> entry=0x1a01490,
    finfo=finfo <at> entry=0x1a01630, instr=instr <at> entry=0x0,
per_tuple_context=per_tuple_context <at> entry=0x1aa5c50,
trig_tuple_slot1=trig_tuple_slot1 <at> entry=0x0,
    trig_tuple_slot2=trig_tuple_slot2 <at> entry=0x0) at trigger.c:3643
#18 0x00000000006132b3 in afterTriggerInvokeEvents
(events=events <at> entry=0x1a8d8a0, firing_id=1,
estate=estate <at> entry=0x1a01240, delete_ok=delete_ok <at> entry=1 '\001')
    at trigger.c:3836
#19 0x0000000000618134 in AfterTriggerEndQuery
(estate=estate <at> entry=0x1a01240) at trigger.c:4008
#20 0x00000000006373d2 in standard_ExecutorFinish
(queryDesc=0x1ab33d0) at execMain.c:411
#21 0x0000000000637449 in ExecutorFinish
(queryDesc=queryDesc <at> entry=0x1ab33d0) at execMain.c:379
#22 0x00000000007af5c6 in ProcessQuery (plan=plan <at> entry=0x1aaa3b8,
sourceText=0x1a4dda0 "delete from countries where continent like
'%2';", params=0x0,
    dest=dest <at> entry=0x1aaa498,
completionTag=completionTag <at> entry=0x7ffff20c9d20 "DELETE 249") at
pquery.c:225
#23 0x00000000007af73d in PortalRunMulti
(portal=portal <at> entry=0x1aa7720, isTopLevel=isTopLevel <at> entry=1 '\001',
dest=dest <at> entry=0x1aaa498,
    altdest=altdest <at> entry=0x1aaa498,
completionTag=completionTag <at> entry=0x7ffff20c9d20 "DELETE 249") at
pquery.c:1275
#24 0x00000000007b077e in PortalRun (portal=portal <at> entry=0x1aa7720,
count=count <at> entry=9223372036854775807, isTopLevel=isTopLevel <at> entry=1
'\001',
    dest=dest <at> entry=0x1aaa498, altdest=altdest <at> entry=0x1aaa498,
completionTag=completionTag <at> entry=0x7ffff20c9d20 "DELETE 249") at
pquery.c:812
#25 0x00000000007ad433 in exec_simple_query
(query_string=query_string <at> entry=0x1a4dda0 "delete from countries
where continent like '%2';") at postgres.c:1094
#26 0x00000000007ae029 in PostgresMain (argc=<optimized out>,
argv=argv <at> entry=0x19e7b98, dbname=0x19e7a48 "postgres",
username=<optimized out>) at postgres.c:4021
#27 0x0000000000730091 in BackendRun (port=port <at> entry=0x1a08150) at
postmaster.c:4258
#28 0x000000000073272c in BackendStartup (port=port <at> entry=0x1a08150)
at postmaster.c:3932
#29 0x0000000000732af2 in ServerLoop () at postmaster.c:1690
#30 0x00000000007345c1 in PostmasterMain (argc=argc <at> entry=3,
argv=argv <at> entry=0x19e6c70) at postmaster.c:1298
#31 0x0000000000689943 in main (argc=3, argv=0x19e6c70) at main.c:223

I was curious how long this would take to run to completion, and it
turned out to be 5m 8s.

This is just trying to delete rows from a table which aren't
referenced by any rows on the other end of the foreign key constraint.

So what this is doing is, for each country matched, it's running a
query on the contacts table like so:

SELECT 1 FROM ONLY "public"."contacts" x WHERE 'Albania1'
OPERATOR(pg_catalog.=) "country" FOR KEY SHARE OF x;

This individual query takes 850ms, and it has to do this 249 times (as
it will only match half the table because of LIKE '%2').

Is there something that can be done to improve this?  If the
referenced table had tens of thousands of rows in it, the query might
as well never complete.  Even with an index, it still requires running
a query for every matched row in the referenced table.
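
Not from the original post, but for reference: PostgreSQL does not
automatically create an index on the referencing column of a foreign key, so
each RI check above has to seq-scan the 5-million-row contacts table. An
index on that column turns each per-row check into a cheap lookup:

```sql
-- speeds up the generated
-- SELECT 1 FROM ONLY contacts x WHERE $1 = country FOR KEY SHARE OF x
-- checks by replacing the seq scan with an index scan
CREATE INDEX contacts_country_idx ON contacts (country);
```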

Thom



benjamin barber | 8 Feb 11:04 2016

Re: Code of Conduct plan

The Code of Conducts basically amount to a "code of wrongthink". This can be best described when some of their advocates, like for example in the Node project make respositories called "mansplain" and "misandry", or when speakers at OSCON are caught with mugs reading "male tears" and using the "#killallmen" hashtag, and are ironically ignored when you report these matters whatsoever.


More importantly, many in the industry have taken an "extend, embrace, extinguish" mentality to open source: projects that were initially open source have become monetized and controlled by organizations which have no intention of allowing open governance or control over the code base, and which use these sorts of "codes of conduct" to label dissenting opinions as "toxic" or heretical.

I'd also like to mention that blacklisting is generally considered illegal, and I consider the application of sanctioned discrimination, even politically correct forms of it, to be illegal as well.

"The Hollywood blacklist—as the broader entertainment industry blacklist is generally known—was the practice of denying employment to screenwriters, actors, directors, musicians, and other American entertainment professionals during the mid-20th century because of their suspected Communist sympathy or membership in the Communist Party."

"John Henry Faulk won his lawsuit in 1962. With this court decision, the private blacklisters and those who used them were put on notice that they were legally liable for the professional and financial damage they caused. This helped to bring an end to publications such as Counterattack"

Joshua Berkus | 8 Feb 10:13 2016

Re: log_min_duration question


----- Original Message -----
> From the 9.5 docs:
> 
>   log_min_duration_statement (integer)
> 
>   Causes the duration of each completed statement to be logged if the
>   statement ran for at least the specified number of
>   milliseconds. Setting this to zero prints all statement
>   durations. Minus-one (the default) disables logging statement
>   durations. For example, if you set it to 250ms then all SQL statements
>   that run 250ms or longer will be logged.
> 
> While the first sentence explicitly states a fixed unit (namely msec),
> the last sentence seems to imply a (maybe optional) unit suffix
> (ms in this case).  Which one is true?
> 
> How would e.g. "SET log_min_duration_statement=1min" be interpreted?

As one minute.

> 
> I'd prefer a docs clarification.

# log all statements which take more than 100ms
log_min_duration_statement = 100

# log all statements which take more than 30 seconds
log_min_duration_statement = 30s

All of the GUCs that take a time value have a base unit (ms or s, used when you give a bare number) and also accept a unit suffix (ms, s, min, h, d).

-- 
Josh Berkus
Red Hat OSAS
(opinions are my own)


Harald Fuchs | 8 Feb 09:25 2016

log_min_duration question

From the 9.5 docs:

  log_min_duration_statement (integer)

  Causes the duration of each completed statement to be logged if the
  statement ran for at least the specified number of
  milliseconds. Setting this to zero prints all statement
  durations. Minus-one (the default) disables logging statement
  durations. For example, if you set it to 250ms then all SQL statements
  that run 250ms or longer will be logged.

While the first sentence explicitly states a fixed unit (namely msec),
the last sentence seems to imply a (maybe optional) unit suffix
(ms in this case).  Which one is true?

How would e.g. "SET log_min_duration_statement=1min" be interpreted?

I'd prefer a docs clarification.


Michael Holzman | 8 Feb 07:24 2016

FDW and transaction management

Greetings,

I am trying to understand the subject. I see in the documentation (http://www.postgresql.org/docs/current/static/postgres-fdw.html) that

F.31.3. Transaction Management

During a query that references any remote tables on a foreign server, postgres_fdw opens a transaction on the remote server if one is not already open corresponding to the current local transaction. The remote transaction is committed or aborted when the local transaction commits or aborts. Savepoints are similarly managed by creating corresponding remote savepoints.

The remote transaction uses SERIALIZABLE isolation level when the local transaction has SERIALIZABLE isolation level; otherwise it uses REPEATABLE READ isolation level. This choice ensures that if a query performs multiple table scans on the remote server, it will get snapshot-consistent results for all the scans. A consequence is that successive queries within a single transaction will see the same data from the remote server, even if concurrent updates are occurring on the remote server due to other activities. That behavior would be expected anyway if the local transaction uses SERIALIZABLE or REPEATABLE READ isolation level, but it might be surprising for a READ COMMITTED local transaction. A future PostgreSQL release might modify these rules.
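
A hypothetical illustration of that documented behavior (remote_tbl stands
for any foreign table defined through postgres_fdw):

```sql
BEGIN;                            -- local transaction, e.g. READ COMMITTED
SELECT count(*) FROM remote_tbl;  -- postgres_fdw opens a REPEATABLE READ
                                  -- transaction on the remote server
-- concurrent updates committed on the remote server here are not visible...
SELECT count(*) FROM remote_tbl;  -- ...so this still sees the same snapshot
COMMIT;                           -- the remote transaction commits as well
```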


Unfortunately, the Postgres Wiki (https://wiki.postgresql.org/wiki/SQL/MED#No_transaction_management) states quite the opposite:

No transaction management

FDW for PostgreSQL never emit transaction command such as BEGIN, ROLLBACK and COMMIT. Thus, all SQL statements are executed in each transaction when 'autocommit' was set to 'on'.


What is the correct state of the subject?


--
Regards,
    Michael Holzman
