Andrew Becker | 31 Oct 23:00 2014

LOADing functions

Hi - I seem to be unable to reLOAD a shared library within the session in which I LOADed it.  I am developing a UDF, and my debugging changes do not take effect until I quit psql and restart.  The following sequence demonstrates the issue:

kbmod=# CREATE OR REPLACE FUNCTION hello(TEXT) RETURNS TEXT
            AS '/Users/acbecker/src/github/kbmod/src/hello.so', 'hello'
            LANGUAGE C STRICT;
kbmod=# SELECT hello( name ) FROM test;
     hello
----------------
 Hello4, Xavier
 Hello4, Yari
 Hello4, Zack
(3 rows)

### HERE I MODIFY hello.c TO TYPE "Hello5" AND REBUILD hello.so

### DROPPING AND RECREATING THE FUNCTION DOES NOT WORK
kbmod=# DROP FUNCTION hello(TEXT);
kbmod=# CREATE OR REPLACE FUNCTION hello(TEXT) RETURNS TEXT
            AS '/Users/acbecker/src/github/kbmod/src/hello.so', 'hello'
            LANGUAGE C STRICT;
kbmod=# SELECT hello( name ) FROM test;
     hello
----------------
 Hello4, Xavier
 Hello4, Yari
 Hello4, Zack
(3 rows)

### RELOADING THE SHARED LIBRARY ALSO DOES NOT HELP
kbmod=# LOAD '/Users/acbecker/src/github/kbmod/src/hello.so';
LOAD
kbmod=# SELECT hello( name ) FROM test;
     hello
----------------
 Hello4, Xavier
 Hello4, Yari
 Hello4, Zack
(3 rows)

### HOWEVER, AFTER QUITTING AND RESTARTING, THE CHANGE TAKES
kbmod=# \q
prompt> /opt/local/lib/postgresql93/bin/psql -U postgres -d kbmod
psql (9.3.5)
Type "help" for help.
kbmod=# SELECT hello( name ) FROM test;
     hello
----------------
 Hello5, Xavier
 Hello5, Yari
 Hello5, Zack
(3 rows)

Hints as to what is going wrong here?  I would certainly expect to be able to re-load a shared library while debugging my UDF.
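For context, the behavior resembles module caching in other runtimes: once a process has loaded a library, re-issuing the load in the same session does not necessarily re-read the file from disk. A rough Python analogy using its import cache (the module name and temp-dir setup are illustrative only, nothing PostgreSQL-specific):

```python
# Analogy only: Python caches imported modules in sys.modules, much like a
# server process keeps using a shared library it has already loaded.
import importlib
import os
import sys
import tempfile

sys.dont_write_bytecode = True          # always compile from source here

workdir = tempfile.mkdtemp()
sys.path.insert(0, workdir)
mod_path = os.path.join(workdir, "hello_mod.py")

with open(mod_path, "w") as f:
    f.write("def hello(name):\n    return 'Hello4, ' + name\n")

import hello_mod
print(hello_mod.hello("Xavier"))        # Hello4, Xavier

# "Rebuild" the library on disk...
with open(mod_path, "w") as f:
    f.write("def hello(name):\n    return 'Hello5, ' + name\n")

# ...but the cached copy is still used until explicitly reloaded
print(hello_mod.hello("Xavier"))        # still Hello4, Xavier

importlib.invalidate_caches()
importlib.reload(hello_mod)             # analogous to starting a fresh session
print(hello_mod.hello("Xavier"))        # Hello5, Xavier
```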

Thanks,
Andy

psql 9.3.5 from Macports
on OS X 10.10
Anil Menon | 31 Oct 11:24 2014

Basic question regarding insert

Hi,

I have a very basic question on inserts - I tried to get a good authoritative answer but could not really find one to my satisfaction in the usual places.

TL;DR: can an AFTER INSERT trigger run in parallel?

Assume an OLTP environment where lots of inserts are happening to a transaction (TX) table. These inserts are very small, touching only 3 columns. No deletes or updates happen to this table.

Every insert into this TX table fires an AFTER INSERT trigger. The trigger inserts 1 to N new records into an audit table, AUD. No deletes or updates are performed on this table either.

Most of the time (99% of the time) the trigger inserts only 1 record into the AUD table for an insert into TX (called an "event tx"), but the other 1% of the time thousands of new records are inserted into the AUD table for a single insert into TX (called a "correlate tx").

My question is: does the trigger block new inserts into the TX table until it completes its execution? Are new inserts into the TX table (and their triggers) queued up and executed sequentially, or do they run in parallel?

i.e., assume a trigger takes 1000 ms to complete. If, during those 1000 ms, X new records are inserted into the TX table, firing X new triggers, are those X inserts blocked until the current trigger completes, and then executed one by one?

Background: a message queue (MQ) interceptor receives MQ messages from the queue and inserts them into the TX table. Most messages are event messages, which insert 1 record into the AUD table through a trigger. One message type is a correlate message, and this requires processing: it queries the received messages and performs a correlation analysis (a single analytical query - thank you, Postgres!). The result set is saved into the AUD table and can range from a few hundred to thousands of records. Multiple instances of the MQ interceptor can be run; each would be a new session on the database.
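For concreteness, the 1-insert-to-N-audit-rows mechanics described above can be sketched with an AFTER INSERT trigger. This uses SQLite via Python's sqlite3 purely as a self-contained illustration (the table and column names are invented); PostgreSQL's trigger syntax differs, and the concurrency behavior asked about is exactly what this single-connection sketch cannot show.

```python
# Illustration of AFTER INSERT trigger mechanics (SQLite via sqlite3;
# PostgreSQL syntax differs, but the concept is the same). Table and
# column names here are invented for the sketch.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE tx  (id INTEGER PRIMARY KEY, msg_type TEXT, payload TEXT);
    CREATE TABLE aud (id INTEGER PRIMARY KEY, tx_id INTEGER, note TEXT);

    -- Every insert into tx writes one audit row; the trigger body runs
    -- synchronously as part of the inserting statement.
    CREATE TRIGGER tx_audit AFTER INSERT ON tx
    BEGIN
        INSERT INTO aud (tx_id, note) VALUES (NEW.id, 'event ' || NEW.msg_type);
    END;
""")

for i, mtype in enumerate(["event", "event", "correlate"]):
    conn.execute("INSERT INTO tx (msg_type, payload) VALUES (?, ?)",
                 (mtype, str(i)))

audit_rows = conn.execute("SELECT COUNT(*) FROM aud").fetchone()[0]
print(audit_rows)   # 3: one audit row per tx insert
```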

Thanks in advance
Anil

Jayadevan M | 31 Oct 08:46 2014

Query planning

Hi all,

I am going through PostgreSQL documentation on how row estimation is done in PostgreSQL.
http://www.postgresql.org/docs/current/static/row-estimation-examples.html
There is a reference to cardinality (rel_cardinality) there. But that is an estimate of the number of rows in the table, right?

Is the number of unique values in a column (its cardinality) also estimated when the selectivity is worked out?
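For reference, the arithmetic described on that documentation page can be sketched as follows: for an equality predicate on a value with no matching most-common-value statistic, the selectivity is roughly 1 divided by the estimated number of distinct values, and the row estimate is the table's estimated row count times that selectivity. A minimal sketch with invented numbers:

```python
# Simplified sketch of the row-estimation arithmetic from the docs:
# for "col = const" with no matching most-common value, selectivity is
# roughly 1 / n_distinct, and the row estimate is reltuples * selectivity.
# The numbers below are invented for illustration.
reltuples = 10000      # planner's estimate of total rows in the table
n_distinct = 250       # estimated number of distinct values in the column

selectivity = 1.0 / n_distinct
estimated_rows = round(reltuples * selectivity)
print(estimated_rows)  # 40
```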

Regards,
Jayadevan

Torsten Förtsch | 30 Oct 20:02 2014

Planet Postgres

Hi,

I don't know if this is the right mailing list for my question but I
have already emailed planet <at> postgresql.org without any response.

What is the correct way to get a blog listed on planet postgres?

The documented way does not work. I registered my blog on Monday and it is still in the "Not approved yet" state. Yesterday I asked planet <at> ... again; no response.

So, what did I do wrong?

Thanks,
Torsten

--

-- 
Sent via pgsql-general mailing list (pgsql-general <at> postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general

Mike Blackwell | 30 Oct 19:52 2014

Estimating WAL usage during pg_basebackup

I need to get an idea of how much WAL space will be required during a long (many hours) pg_basebackup over a relatively slow network connection.  This is for a server that's not yet running PITR / streaming.
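One back-of-the-envelope approach (a sketch, not authoritative): sample the WAL position before and after a representative interval, convert the 'X/Y' hex positions to byte offsets (X being the high 32 bits), and scale the per-hour rate by the expected backup duration. The values below are invented:

```python
# Sketch: estimate WAL bytes generated per hour by diffing two WAL
# positions ('X/Y' hex format, X = high 32 bits of the byte position),
# then scale by the expected pg_basebackup duration. Sample values
# are invented.
def lsn_to_bytes(lsn: str) -> int:
    hi, lo = lsn.split("/")
    return (int(hi, 16) << 32) | int(lo, 16)

# e.g. from SELECT pg_current_xlog_location(); sampled one hour apart
start = lsn_to_bytes("2/A0000000")
end   = lsn_to_bytes("2/E8000000")
bytes_per_hour = end - start

backup_hours = 12                        # expected duration of the base backup
wal_needed = bytes_per_hour * backup_hours
print(wal_needed // (1024 ** 2), "MiB")  # rough WAL volume to accommodate
```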

Any thoughts?
David G Johnston | 30 Oct 19:17 2014

Re: syntax error while using pg_dump in Postgres 8.4

The psql console still operates mainly at the SQL level of input.  You need to use your OS's shell to run pg_dump.

Both psql and pg_dump are the same kind of application and are run in a similar fashion.

David J.


On Thu, Oct 30, 2014 at 11:13 AM, CrabbeS [via PostgreSQL] <[hidden email]> wrote:
ok That helps...

I am now trying this in psql console- think this should work... (still getting some errors- but will try to figure it out)

Thank you very much !
Sarah



View this message in context: Re: syntax error while using pg_dump in Postgres 8.4
Sent from the PostgreSQL - general mailing list archive at Nabble.com.
David G Johnston | 30 Oct 17:52 2014

Re: syntax error while using pg_dump in Postgres 8.4

CrabbeS wrote
> I am trying to copy a table form one database to another by using this
> command in the SQL-query builder:
> pg_dump TA2000 -t from Forest -c -s | psql -h localhost postgis;

pg_dump is a standalone application (i.e., something you run in a
shell/command-line) and not an SQL command.

Pipes (the "|" in your string) are also shell/command-line only and cannot
be used where SQL is expected.

David J.



Karsten Hilbert | 30 Oct 10:30 2014

PostgreSQL on tablet grade SSD ?

Hi !

Can anyone share any experiences with running PostgreSQL on a
tablet ?  (Surface Pro 3, ASUS Transformer)

(or point to things I should read, searching the web didn't
 turn up anything that seemed relevant to *tablets*)

Thanks,
Karsten
-- 
GPG key ID E4071346  <at>  eu.pool.sks-keyservers.net
E167 67FD A291 2BEA 73BD  4537 78B9 A9F9 E407 1346


Fabio Ugo Venchiarutti | 30 Oct 07:12 2014

Unexpected planner behavior with *_pattern_ops index matching

Greetings.

I'm in the process of indexing a virtual file system (on 9.2.9, build 
info below) and I ran into what I perceive as an inconsistency in the 
way index viability is assessed by the planner.

Perhaps I'm misinterpreting the docs, but it seems like stable functions 
don't behave as per 
http://www.postgresql.org/docs/9.2/static/xfunc-volatility.html when 
*_pattern_ops operator classes are used (not sure about others).

Steps I followed to reproduce the anomaly:

geoop_prototype=# -- Build/platform Info:
geoop_prototype=# SELECT version();
                                                   version
--------------------------------------------------------------------------------------------------------------
 PostgreSQL 9.2.9 on x86_64-unknown-linux-gnu, compiled by gcc (GCC) 4.4.7 20120313 (Red Hat 4.4.7-4), 64-bit
(1 row)

geoop_prototype=# 
----------------------------------------------------------------
geoop_prototype=#
geoop_prototype=# -- Small sample of data volume/distribution from the 
involved table and column.
geoop_prototype=# SELECT COUNT(0) FROM inode_segments WHERE (full_path 
IS NOT NULL);
  count
--------
  291019
(1 row)

geoop_prototype=# SELECT char_length(full_path) FROM inode_segments 
WHERE (full_path IS NOT NULL) ORDER BY random() LIMIT 10;
  char_length
-------------
           80
          126
          108
           75
          116
           71
           70
           76
          137
          113
(10 rows)

geoop_prototype=# 
----------------------------------------------------------------
geoop_prototype=#
geoop_prototype=# -- Query plan without the operator class-specific 
index. As expected the left anchored regex prompts a table scan
geoop_prototype=# EXPLAIN SELECT st_ino FROM inode_segments WHERE 
full_path ~ '^/THIS/MATCHES/NOTHING/';
                               QUERY PLAN
-----------------------------------------------------------------------
  Seq Scan on "inode_segments"  (cost=0.00..27401.85 rows=29 width=8)
    Filter: (("full_path")::"text" ~ '^/THIS/MATCHES/NOTHING/'::"text")
(2 rows)

geoop_prototype=# 
----------------------------------------------------------------
geoop_prototype=#
geoop_prototype=# -- We now define an ad-hoc index
geoop_prototype=# CREATE INDEX ix_inode_segments_filter_by_subtree ON 
gorfs.inode_segments USING BTREE(full_path varchar_pattern_ops);
CREATE INDEX
geoop_prototype=# 
----------------------------------------------------------------
geoop_prototype=#
geoop_prototype=# -- Same query as above. Predictably, the index is now 
being scanned instead
geoop_prototype=# EXPLAIN SELECT st_ino FROM inode_segments WHERE 
full_path ~ '^/THIS/MATCHES/NOTHING/';

                                                                   QUERY PLAN
------------------------------------------------------------------------------------------------------------------------------------------------
 Index Scan using "ix_inode_segments_filter_by_subtree" on "inode_segments"  (cost=0.00..8.49 rows=29 width=8)
   Index Cond: ((("full_path")::"text" ~>=~ '/THIS/MATCHES/NOTHING/'::"text") AND (("full_path")::"text" ~<~ '/THIS/MATCHES/NOTHING0'::"text"))
   Filter: (("full_path")::"text" ~ '^/THIS/MATCHES/NOTHING/'::"text")
(3 rows)

geoop_prototype=# 
----------------------------------------------------------------
geoop_prototype=#
geoop_prototype=# -- However, when the comparison value comes from a 
function that is marked as STABLE, the planner reverts to a full scan
geoop_prototype=# EXPLAIN SELECT st_ino FROM inode_segments WHERE 
full_path ~ CONCAT('^/THIS/MATCHES/NOTHING/');
                                QUERY PLAN
-------------------------------------------------------------------------
  Seq Scan on "inode_segments"  (cost=0.00..28789.02 rows=29 width=8)
    Filter: (("full_path")::"text" ~ "concat"('^/THIS/MATCHES/NOTHING/'))
(2 rows)

geoop_prototype=# 
----------------------------------------------------------------
geoop_prototype=#
geoop_prototype=# -- Immutable functions are not affected...
geoop_prototype=# EXPLAIN SELECT st_ino FROM inode_segments WHERE 
full_path ~ UPPER('^/THIS/MATCHES/NOTHING/');

                                                                   QUERY PLAN
------------------------------------------------------------------------------------------------------------------------------------------------
 Index Scan using "ix_inode_segments_filter_by_subtree" on "inode_segments"  (cost=0.00..8.49 rows=29 width=8)
   Index Cond: ((("full_path")::"text" ~>=~ '/THIS/MATCHES/NOTHING/'::"text") AND (("full_path")::"text" ~<~ '/THIS/MATCHES/NOTHING0'::"text"))
   Filter: (("full_path")::"text" ~ '^/THIS/MATCHES/NOTHING/'::"text")
(3 rows)

----------------------------------------------------------------
geoop_prototype=# -- ... nor are other operator classes (with 
preexisting index. Note that CONCAT is again being used here)
geoop_prototype=# EXPLAIN SELECT st_ino FROM inode_segments WHERE 
full_path > CONCAT('/THIS/MATCHES/NOTHINA/');
                                                QUERY PLAN
---------------------------------------------------------------------------------------------------------
 Index Scan using "uc_no_duplicate_full_paths" on "inode_segments"  (cost=0.00..167.32 rows=418 width=8)
   Index Cond: (("full_path")::"text" > '/THIS/MATCHES/NOTHINA/'::"text")
(2 rows)

As you can see, CONCAT()'s output isn't deemed suitable for an index scan. The same happens for all type-compatible STABLE functions I tried.
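As an aside, the rewrite the planner performs when it can use the pattern-ops index is visible in the plans above: the left-anchored pattern becomes a btree range whose upper bound is the prefix with its last byte incremented ('/' + 1 = '0', hence '/THIS/MATCHES/NOTHING0'). A quick sketch of that derivation (simplified; the real code handles byte overflow and collations):

```python
# Sketch of how a left-anchored prefix becomes a btree range, as seen in
# the Index Cond lines above: lower bound is the prefix itself, upper
# bound is the prefix with its final byte incremented (simplified).
def prefix_range(prefix: str):
    upper = prefix[:-1] + chr(ord(prefix[-1]) + 1)
    return prefix, upper

lo, hi = prefix_range("/THIS/MATCHES/NOTHING/")
print(lo)  # /THIS/MATCHES/NOTHING/
print(hi)  # /THIS/MATCHES/NOTHING0  ('/' + 1 == '0')
```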

Am I missing something here?

TIA and Regards

Fabio Venchiarutti



undefined struct 'pg_conn' on Windows7

Hello!
I installed PostgreSQL from the postgresql-9.3.5-3-windows-x64.exe file and am trying to use the libpq.lib that ships with it.
My system is Windows 7 Service Pack 1 and I use Visual Studio Express 12 to write the code.
I added the paths to <postgre_install_dir>\include and <postgre_install_dir>\lib to the project and included the libpq-fe.h file.
Unfortunately, when I try to build the project I get the error "uses undefined struct 'pg_conn'".
How can I solve this problem?
Thank you, and excuse me for my bad English.

-- 
Best regards,
Brilliantov Kirill Vladimirovich


VENKTESH GUTTEDAR | 29 Oct 08:05 2014

Need Guidance on Appending new data to existing field of Json data type

Hello,

    As I am new to PostgreSQL, I am learning by experimenting with things.

    i have a table with json data type field, so there is some data for example :
       
        { [ { a:b, b:c } ] }

    and now if i append data then it should be like :

        { [ { a:b, b:c }, { e:f, g:h } ] }

    Is there any way to achieve this? Please help!

    I have Postgresql 9.3.5
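For reference, 9.3 has no built-in operator to append to a json value in place (jsonb concatenation arrived in later releases), so one common approach is read-modify-write from the client. A minimal sketch with Python's standard json module (the structure below is a plain array of objects, since the fragment in the question is not strictly valid JSON):

```python
# Client-side read-modify-write sketch: fetch the json value, append an
# element, and write it back. Uses a plain array of objects, since the
# fragment in the question is not valid JSON as written.
import json

stored = '[{"a": "b", "b": "c"}]'          # value read from the json column

doc = json.loads(stored)
doc.append({"e": "f", "g": "h"})           # append the new element

updated = json.dumps(doc)
print(updated)  # [{"a": "b", "b": "c"}, {"e": "f", "g": "h"}]
```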

--
Regards :
Venktesh Guttedar.

