Day, David | 2 Sep 22:19 2014

puzzled by "commit" Logging statement duration

Hoping for a teachable moment :+)

 

With statement-duration logging enabled in the postgres log file, I observe:

 

2014-09-02T12:47:38.107808-04:00 alabama local0 info postgres[37874]: [702-1] user=ace_db_client,db=ace_db LOG:  duration: 0.040 ms  statement: BEGIN
2014-09-02T12:47:38.108714-04:00 alabama local0 info postgres[37874]: [703-1] user=ace_db_client,db=ace_db LOG:  duration: 0.595 ms  statement: SELECT log.table_maintenance()
2014-09-02T12:47:38.218759-04:00 alabama local0 info postgres[37874]: [704-1] user=ace_db_client,db=ace_db LOG:  duration: 109.639 ms  statement: COMMIT

 

Why is the COMMIT duration so large in [704-1], when the work was done in [703-1]?

 

Detail:

I have a client application attaching to a postgres 9.3 server and periodically invoking a server-side function (log.table_maintenance).

Autocommit is enabled for the session; would not any commit work have completed on the return from the SELECT?

(I thought functions were auto-commit? I also note that the table_maintenance function returns VOID.)

 

 

Client side code: C++ using the libpqxx library.

 

int log_table_maint(pqxx::work &t, const IpmPtr ipm) {
    int err = 0;

    try {
        t.exec("SELECT log.table_maintenance()");
        t.commit();
    }
    catch (const std::exception &e) {
        t.abort();
        err = DB_STAT_FAIL;
    }

    return err;
}

 

When attached locally to the server and running the same command from the psql shell I observe:

 

ace_db=# explain analyze select log.table_maintenance();
                                     QUERY PLAN
------------------------------------------------------------------------------------
 Result  (cost=0.00..0.26 rows=1 width=0) (actual time=1.433..1.439 rows=1 loops=1)
 Total runtime: 1.550 ms
(2 rows)
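A likely explanation, hedged: libpqxx's pqxx::work wraps the statements in an explicit BEGIN/COMMIT, and COMMIT is where the WAL flush (fsync) happens, so the ~110 ms is mostly waiting on disk rather than query work. One way to test that hypothesis, as a sketch:

```sql
-- If the logged COMMIT time collapses with synchronous commit off, the cost
-- was the WAL flush. (Trade-off: a crash can lose the last few commits; it
-- cannot corrupt the database.)
SET synchronous_commit = off;
BEGIN;
SELECT log.table_maintenance();
COMMIT;
```

Compare the logged COMMIT duration with synchronous_commit on versus off; the EXPLAIN ANALYZE timing above only ever measures the SELECT, never the commit.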

 

 

A bit puzzled

 

 

Thanks for any thoughts or illumination.

 

Regards

 

 

Dave Day

Patrick Dung | 2 Sep 20:14 2014

Question about gin index not used on a tsv column

Hello Postgresql users,

In my setting, I found that sometimes the query does not use the gin index built for a tsv column.

The output below provides more info (with EXPLAIN ANALYZE).

Thanks and regards,
Patrick
jobs=> explain analyze SELECT company,industry FROM jobs WHERE tsv @@ to_tsquery('english', 'travel');
                                                        QUERY PLAN
---------------------------------------------------------------------------------------------------------------------------
 Append  (cost=0.00..35337.69 rows=61625 width=67) (actual time=0.122..59909.736 rows=61114 loops=1)
   ->  Seq Scan on jobs  (cost=0.00..0.00 rows=1 width=64) (actual time=0.001..0.001 rows=0 loops=1)
         Filter: (tsv @@ '''travel'''::tsquery)
   ->  Seq Scan on jobs_2014p  (cost=0.00..25629.21 rows=46735 width=64) (actual time=0.121..47191.053 rows=46142 loops=1)
         Filter: (tsv @@ '''travel'''::tsquery)
         Rows Removed by Filter: 243194
   ->  Seq Scan on jobs_2013p  (cost=0.00..9708.48 rows=14889 width=75) (actual time=5.325..12708.878 rows=14972 loops=1)
         Filter: (tsv @@ '''travel'''::tsquery)
         Rows Removed by Filter: 86893
 Total runtime: 59917.092 ms
(10 rows)

jobs=> set random_page_cost = 2;
SET
jobs=> explain analyze SELECT company,industry FROM jobs WHERE tsv @@ to_tsquery('english', 'travel');
                                                                     QUERY PLAN
----------------------------------------------------------------------------------------------------------------------------------------------------
 Append  (cost=0.00..34676.86 rows=61625 width=67) (actual time=280.185..2151.618 rows=61117 loops=1)
   ->  Seq Scan on jobs  (cost=0.00..0.00 rows=1 width=64) (actual time=0.001..0.001 rows=0 loops=1)
         Filter: (tsv @@ '''travel'''::tsquery)
   ->  Bitmap Heap Scan on jobs_2014p  (cost=2352.20..24968.38 rows=46735 width=64) (actual time=280.184..1784.566 rows=46143 loops=1)
         Recheck Cond: (tsv @@ '''travel'''::tsquery)
         ->  Bitmap Index Scan on jobs_2014p_tsv_gin_idx  (cost=0.00..2340.51 rows=46735 width=0) (actual time=277.210..277.210 rows=46143 loops=1)
               Index Cond: (tsv @@ '''travel'''::tsquery)
   ->  Seq Scan on jobs_2013p  (cost=0.00..9708.48 rows=14889 width=75) (actual time=0.069..361.839 rows=14974 loops=1)
         Filter: (tsv @@ '''travel'''::tsquery)
         Rows Removed by Filter: 86908
 Total runtime: 2154.907 ms
(11 rows)

jobs=> set random_page_cost = 4;
SET
jobs=> explain analyze SELECT company,industry FROM jobs WHERE tsv @@ to_tsquery('english', 'travel');
                                                        QUERY PLAN
--------------------------------------------------------------------------------------------------------------------------
 Append  (cost=0.00..35337.69 rows=61625 width=67) (actual time=0.229..3462.236 rows=61117 loops=1)
   ->  Seq Scan on jobs  (cost=0.00..0.00 rows=1 width=64) (actual time=0.001..0.001 rows=0 loops=1)
         Filter: (tsv @@ '''travel'''::tsquery)
   ->  Seq Scan on jobs_2014p  (cost=0.00..25629.21 rows=46735 width=64) (actual time=0.228..2898.271 rows=46143 loops=1)
         Filter: (tsv @@ '''travel'''::tsquery)
         Rows Removed by Filter: 243213
   ->  Seq Scan on jobs_2013p  (cost=0.00..9708.48 rows=14889 width=75) (actual time=44.810..556.103 rows=14974 loops=1)
         Filter: (tsv @@ '''travel'''::tsquery)
         Rows Removed by Filter: 86912
 Total runtime: 3468.134 ms
(10 rows)
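If the bitmap/index plan wins consistently on this hardware, the lower cost setting can be made persistent rather than set per session. A sketch, assuming the database is named jobs as in the prompt above:

```sql
-- Make the lower random-page cost the default for every new session on this DB:
ALTER DATABASE jobs SET random_page_cost = 2;

-- Or scope it to the tablespace holding the index:
ALTER TABLESPACE pg_default SET (random_page_cost = 2);
```

random_page_cost = 2 is often a better fit than the default 4 when the data is mostly cached or on fast storage, which is consistent with the seq-scan timings shrinking from ~60 s to ~3.5 s between runs here.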

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general
Philip Rhoades | 2 Sep 05:05 2014

Displaying a wide column with psql

People,

I have a table with a field that holds up to 1,000 chars. When I 
look at it with psql it only displays about 200 chars with a "…" 
on the end of the string. Even when I look at just one record and use 
the \x option I get the same result. I googled for ages but, short of 
using pg_dump, I couldn't see an interactive way of seeing the whole 
field. Is it possible?
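For what it's worth, psql itself does not normally truncate values, so the ellipsis may come from a pager or some client-side setting; a few hedged things to try at the psql prompt (table and column names below are hypothetical):

```sql
\pset pager off          -- rule out the pager mangling long values
\pset format unaligned   -- print the raw value without column padding
\x on                    -- expanded display: one field per line
\o /tmp/field.txt        -- or capture the next query's output in a file
SELECT wide_field FROM my_table WHERE id = 1;
\o
```

If the value is intact in /tmp/field.txt, the truncation is happening in the display layer, not in psql or the server.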

Thanks,

Phil.
-- 
Philip Rhoades

GPO Box 3411
Sydney NSW	2001
Australia
E-mail:  phil@pricom.com.au


swaroop | 1 Sep 17:24 2014

copymanager question

Hello all

I am using CopyManager instead of prepared statements for faster inserts, but I have a
question about inserting values which contain quotes (single or double). With
prepared statements we know it is safe to do so by calling setString. Is the same
possible here, since I do not want to blindly single-quote a string value in the
COPY IN stream?

Basically I am looking for the same robustness as PreparedStatement's setString.

Can I use functions like quote_literal? Any help is very much appreciated.
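One hedged observation: quote_literal builds SQL string literals, which is not the escaping COPY expects. What usually answers this is driving CopyManager with COPY ... (FORMAT csv), because CSV defines its own quoting rules: wrap a field in double quotes and double any embedded double quote; single quotes need no escaping at all. A sketch with a hypothetical table:

```sql
COPY mytable (id, val) FROM STDIN WITH (FORMAT csv);
1,"It's a ""quoted"" value"
\.
```

The trailing \. is how psql ends inline COPY data; through CopyManager you simply end the stream. Each row your code emits just has to follow the CSV quoting rules above.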

thanks



Arnaud Lesauvage | 1 Sep 15:14 2014

Expected behaviour of \d in a regexp with exponent numbers?

Hi all,

I just came across this while trying to upgrade my server from 8.4.8 to 9.3.4:

SELECT substring('²' FROM E'\\d');

8.4 : NULL
9.3 : "²"

Am I correct to expect NULL in this case?
Thanks!
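If the intent is to match ASCII digits only, an explicit character class sidesteps any locale-dependent reading of \d; a sketch:

```sql
-- '²' (U+00B2) is outside the 0-9 range, so no match is found and the
-- result should be NULL on both versions:
SELECT substring('²' FROM '[0-9]');
```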

--
Arnaud

--

-- 
Sent via pgsql-general mailing list (pgsql-general <at> postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general

Shida Sato | 31 Aug 23:05 2014

About limit on cube dimensions

Hi 

Why is there a limit on the number of cube dimensions?
It is a bit strange, because ARRAY, which is similar to cube, has no such limit.
Does it relate to the R-tree?
Can I use a 10000-dimensional cube without an R-tree?

---
sato
pasman.p@gmail.com | 31 Aug 16:22 2014

Small fix in PG 9.4b2 - pg_env.sh

Hi.

I propose adding the following line to pg_env.sh:

export LD_LIBRARY_PATH=<pg_lib_dir>:$LD_LIBRARY_PATH

Postgres does not need it, but pgAdmin does.


Patrick Dung | 30 Aug 06:38 2014

How to make use of partitioned table for faster query?

Hello Postgresql users,

Suppose the table 'attendance' is very large:
id bigint
student_name varchar
late boolean
record_timestamp timestamp

The table is already partitioned by year (attendance_2012p, attendance_2013p, ...).
I would like to count the number of lates by year.

Instead of specifying the partition table names:
select count(*) from attendance_2012p where student_name='Student A' and late='true';
select count(*) from attendance_2013p where student_name='Student A' and late='true';
select count(*) from attendance_2014p where student_name='Student A' and late='true';
...

Is it possible to query the master table (attendance) and have the query make use of the partitions for speed?
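A sketch of the usual approach, assuming the partitions carry CHECK constraints on record_timestamp: with constraint_exclusion enabled, a constant range condition on the partition key lets the planner skip partitions that cannot match, so the master table can be queried directly:

```sql
SET constraint_exclusion = partition;  -- this is the default setting

SELECT date_trunc('year', record_timestamp) AS yr, count(*)
FROM attendance
WHERE student_name = 'Student A'
  AND late
  AND record_timestamp >= '2012-01-01'
  AND record_timestamp <  '2015-01-01'
GROUP BY yr
ORDER BY yr;
```

EXPLAIN on such a query should show only the partitions inside the date range being scanned; without a constant condition on the partition key, all partitions are scanned.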

Thanks and regards,
Patrick
Emi Lu | 29 Aug 20:33 2014

alter column to varchar without view drop/re-creation

Hello list,

May I know if there is a way to alter a column's type to varchar (previously varchar(***)) without view drop/re-creation?

Basically, I am looking for a way to change the column without having to drop/re-create dependent views.

varchar(***) to varchar only; no date/numeric changes.

Thanks a lot!
Emi
---
PostgreSQL 8.3.18 on x86_64
Patrick Dung | 29 Aug 20:23 2014

Is there a function to save schema history internally?

Hello Postgresql users,

Is there a function to save schema history internally?
By keeping the schema history inside the DB, we could track what changed in the schema, and when.

While searching Google, it seems this is a limitation of the audit-trigger approach:
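One partial workaround on 9.3, sketched here: an event trigger can at least record when DDL ran and its command tag, though capturing the full command text only became easy in later releases (pg_event_trigger_ddl_commands). Table and function names below are made up for the example:

```sql
CREATE TABLE ddl_history (
    happened_at timestamptz NOT NULL DEFAULT now(),
    command_tag text,
    username    text
);

CREATE OR REPLACE FUNCTION log_ddl() RETURNS event_trigger AS $$
BEGIN
    -- TG_TAG carries the command tag, e.g. 'CREATE TABLE' or 'ALTER TABLE'
    INSERT INTO ddl_history (command_tag, username)
    VALUES (tg_tag, current_user);
END;
$$ LANGUAGE plpgsql;

CREATE EVENT TRIGGER log_ddl_trigger
    ON ddl_command_end
    EXECUTE PROCEDURE log_ddl();
```

This records who ran DDL and when, but not the statement text itself; pairing it with log_statement = 'ddl' in postgresql.conf covers the rest via the server log.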

Thanks and regards,
Patrick
Emmanuel Medernach | 29 Aug 15:52 2014

Performance issue with postgres_fdw

Hello,

I use Postgres version 9.3.5 and spotted a performance issue with postgres_fdw.

I have a table object_003_xyz with 275,000 rows, exported to the master node as master_object_003_xyz.

(The following query is only part of an automatically generated complex query.)

On the master:

SELECT * FROM master_object_003_xyz AS o1, master_object_003_xyz AS o2
WHERE o1.objectid <> o2.objectid
  AND cos(radians(o1.ra_PS)) * cos(radians(o1.decl_PS))
      BETWEEN cos(radians(o2.ra_PS)) * cos(radians(o2.decl_PS)) - sin(radians(1.5e-5 / 2)) * 2
          AND cos(radians(o2.ra_PS)) * cos(radians(o2.decl_PS)) + sin(radians(1.5e-5 / 2)) * 2
  AND sin(radians(o1.ra_PS)) * cos(radians(o1.decl_PS))
      BETWEEN sin(radians(o2.ra_PS)) * cos(radians(o2.decl_PS)) - sin(radians(1.5e-5 / 2)) * 2
          AND sin(radians(o2.ra_PS)) * cos(radians(o2.decl_PS)) + sin(radians(1.5e-5 / 2)) * 2
  AND sin(radians(o1.decl_PS))
      BETWEEN sin(radians(o2.decl_PS)) - sin(radians(1.5e-5 / 2)) * 2
          AND sin(radians(o2.decl_PS)) + sin(radians(1.5e-5 / 2)) * 2
  AND degrees(asin(sqrt(power(sin(radians((o2.decl_PS - o1.decl_PS) / 2)), 2)
                      + power(sin(radians((o2.ra_PS - o1.ra_PS) / 2)), 2)
                        * cos(radians(o1.decl_PS)) * cos(radians(o2.decl_PS)))) * 2) <= 1.5e-5
(4 rows)
Time: 513711.684 ms

Here is the plan used:

  Nested Loop  (cost=200.70..44187032.64 rows=34518880 width=2168)
    ->  Foreign Scan on master_object_003_xyz o2  (cost=100.00..24294.47 rows=275449 width=1084)
    ->  Foreign Scan on master_object_003_xyz o1  (cost=100.70..160.32 rows=1 width=1084)

On the pool:

SELECT * FROM object_003_xyz AS o1, object_003_xyz AS o2
WHERE o1.objectid <> o2.objectid
  AND cos(radians(o1.ra_PS)) * cos(radians(o1.decl_PS))
      BETWEEN cos(radians(o2.ra_PS)) * cos(radians(o2.decl_PS)) - sin(radians(1.5e-5 / 2)) * 2
          AND cos(radians(o2.ra_PS)) * cos(radians(o2.decl_PS)) + sin(radians(1.5e-5 / 2)) * 2
  AND sin(radians(o1.ra_PS)) * cos(radians(o1.decl_PS))
      BETWEEN sin(radians(o2.ra_PS)) * cos(radians(o2.decl_PS)) - sin(radians(1.5e-5 / 2)) * 2
          AND sin(radians(o2.ra_PS)) * cos(radians(o2.decl_PS)) + sin(radians(1.5e-5 / 2)) * 2
  AND sin(radians(o1.decl_PS))
      BETWEEN sin(radians(o2.decl_PS)) - sin(radians(1.5e-5 / 2)) * 2
          AND sin(radians(o2.decl_PS)) + sin(radians(1.5e-5 / 2)) * 2
  AND degrees(asin(sqrt(power(sin(radians((o2.decl_PS - o1.decl_PS) / 2)), 2)
                      + power(sin(radians((o2.ra_PS - o1.ra_PS) / 2)), 2)
                        * cos(radians(o1.decl_PS)) * cos(radians(o2.decl_PS)))) * 2) <= 1.5e-5
(4 rows)
Time: 2738.217 ms

It is much faster because it uses the available index:

  Nested Loop  (cost=0.56..360279717.93 rows=34692216 width=2168)
    ->  Seq Scan on object_003_xyz o2  (cost=0.00..18685.49 rows=275449 width=1084)
    ->  Index Scan using object_003_xyz_idx_xyz on object_003_xyz o1  (cost=0.56..1306.64 rows=126 width=1084)
          Index Cond: (((cos(radians(ra_ps)) * cos(radians(decl_ps))) >= ((cos(radians(o2.ra_ps)) * cos(radians(o2.decl_ps))) - 2.61799387799149e-07::double precision)) AND ((cos(radians(ra_ps)) * cos(radians(decl_ps))) <= ((cos(radians(o2.ra_ps)) * cos(radians(o2.decl_ps))) + 2.61799387799149e-07::double precision)) AND ((sin(radians(ra_ps)) * cos(radians(decl_ps))) >= ((sin(radians(o2.ra_ps)) * cos(radians(o2.decl_ps))) - 2.61799387799149e-07::double precision)) AND ((sin(radians(ra_ps)) * cos(radians(decl_ps))) <= ((sin(radians(o2.ra_ps)) * cos(radians(o2.decl_ps))) + 2.61799387799149e-07::double precision)) AND (sin(radians(decl_ps)) >= (sin(radians(o2.decl_ps)) - 2.61799387799149e-07::double precision)) AND (sin(radians(decl_ps)) <= (sin(radians(o2.decl_ps)) + 2.61799387799149e-07::double precision)))
          Filter: ((objectid <> o2.objectid) AND (degrees((asin(sqrt((power(sin(radians(((o2.decl_ps - decl_ps) / 2::double precision))), 2::double precision) + ((power(sin(radians(((o2.ra_ps - ra_ps) / 2::double precision))), 2::double precision) * cos(radians(decl_ps))) * cos(radians(o2.decl_ps)))))) * 2::double precision)) <= 1.5e-05::double precision))

Would it be possible to avoid doing a nested loop of foreign
scans when dealing with tables on the same pool? And to
automatically ship the whole query to the pool in that case?

What can I do for now?
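Two mitigations that exist on 9.3, sketched here (automatic join pushdown between foreign tables on the same server only arrived in later releases):

```sql
-- 1) Let the planner ask the remote server for row estimates instead of
--    guessing locally (costs an extra round trip at plan time):
ALTER FOREIGN TABLE master_object_003_xyz
    OPTIONS (ADD use_remote_estimate 'true');

-- 2) Push the join by hand: define the self-join as a view on the pool and
--    expose that view through its own foreign table, so the whole join and
--    its index scan run on the remote side.
```

The second option is the one that recovers the 2.7 s plan: the foreign server then executes exactly the query that was fast locally, index scan included.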

Thanks for your help and best regards,
-- 
Emmanuel Medernach


