Merlin Moncure | 1 Aug 05:28 2007

Re: Using EXECUTE in a function

On 7/31/07, Andreas Tille <tillea <at> rki.de> wrote:
http://www.physiol.ox.ac.uk/Computing/Online_Documentation/postgresql/plpgsql.html#PLPGSQL-OVERVIEW
>
>      Note:  The PL/pgSQL  EXECUTE statement is not related to the EXECUTE
>             statement supported by the PostgreSQL server. The server's EXECUTE
>             statement cannot be used within PL/pgSQL functions (and is not needed).

If I read the documentation correctly, the server's EXECUTE is not needed
because query plans are generally cached within PL/pgSQL after the first
execution of the function.

> I'm especially stumbling over the "is not needed" part.  My plan
> is to write a server side function (either SQL or pgsql) that wraps
> the output of a PREPAREd statement but I have no idea how to do this.
>
> The final task is to obtain some XML form of my data via a simple shell script
> that contains
>
>          psql -t MyDatabase -c 'SELECT * FROM MyFunction ($1, $2);'
>
> The task of MyFunction($1,$2) is to wrap up the main data into an XML
> header (just some text like
>      <?xml version="1.0" encoding="ISO-8859-1"?>
>        ...
> ) around the real data that will be obtained via a PREPAREd statement that is
> declared like this
>
>     PREPARE xml_data(int, int) AS ( SELECT ... WHERE id = $1 AND source = $2 );
>
> where "..." stands for wrapping the output into xml format.
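
In other words, the body of the PREPAREd statement can live directly inside
the function, and PL/pgSQL will cache its plan after the first execution
anyway.  A minimal sketch of the kind of wrapper described above (the table
and column names are invented for illustration; the real SELECT goes inside
the loop):

    CREATE OR REPLACE FUNCTION MyFunction(p_id int, p_source int)
    RETURNS SETOF text AS $$
    DECLARE
        r record;
    BEGIN
        -- emit the XML header first
        RETURN NEXT '<?xml version="1.0" encoding="ISO-8859-1"?>';
        RETURN NEXT '<rows>';
        -- this takes the place of the PREPAREd statement; PL/pgSQL caches its plan
        FOR r IN
            SELECT payload FROM my_table WHERE id = p_id AND source = p_source
        LOOP
            RETURN NEXT '<row>' || r.payload || '</row>';
        END LOOP;
        RETURN NEXT '</rows>';
        RETURN;
    END;
    $$ LANGUAGE plpgsql;

The shell script then only needs

    psql -t MyDatabase -c "SELECT * FROM MyFunction($1, $2);"

(double quotes, so that the shell actually substitutes $1 and $2; with the
single quotes shown above they would be passed to psql literally).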

Adriaan van Os | 1 Aug 10:27 2007

Re: TRUNCATE TABLE

Tom Lane wrote:

> Somehow, autovac is doing something that makes the filesystem go nuts
> every so often, and take an astonishingly long time to create an empty
> file.  But autovac itself doesn't create or delete any files, so what's
> up here?
> 
> Also, I was able to reproduce the variability in timing on HPUX and
> Darwin as well as Linux, so we can't put all the blame on ext3.
> (I didn't drill down to the strace level on the other two machines,
> though, so it's possible that there is a different mechanism at work
> there.)

Any news since this message ? Should I file a bug report ?

Regards,

Adriaan van Os


Kevin Grittner | 1 Aug 16:22 2007

Re: TRUNCATE TABLE

>>> On Mon, Jul 16, 2007 at  7:18 PM, in message <25418.1184631498 <at> sss.pgh.pa.us>,
Tom Lane <tgl <at> sss.pgh.pa.us> wrote: 
> Somehow, autovac is doing something that makes the filesystem go nuts
> every so often, and take an astonishingly long time to create an empty
> file.  But autovac itself doesn't create or delete any files, so what's
> up here?

Have you ruled out checkpoints as the culprit?

-Kevin
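
One low-impact way to check is to make frequent checkpoints visible in the
server log and see whether they line up with the stalls; a sketch, with
illustrative values:

    # postgresql.conf
    checkpoint_warning = 300    # log WAL-forced checkpoints occurring less than 5 min apart
    log_line_prefix = '%t '     # timestamp log lines so they can be matched against the stalls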


Alvaro Herrera | 1 Aug 17:11 2007

Re: TRUNCATE TABLE

Adriaan van Os wrote:
> Tom Lane wrote:
>
>> Somehow, autovac is doing something that makes the filesystem go nuts
>> every so often, and take an astonishingly long time to create an empty
>> file.  But autovac itself doesn't create or delete any files, so what's
>> up here?
>> Also, I was able to reproduce the variability in timing on HPUX and
>> Darwin as well as Linux, so we can't put all the blame on ext3.
>> (I didn't drill down to the strace level on the other two machines,
>> though, so it's possible that there is a different mechanism at work
>> there.)
>
> Any news since this message ? Should I file a bug report ?

Were you able to show that turning off autovacuum removes the
performance problem?

-- 
Alvaro Herrera                                http://www.CommandPrompt.com/
PostgreSQL Replication, Consulting, Custom Development, 24x7 support
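
For reference, that experiment should not need a restart, since autovacuum
can be toggled with a configuration reload; a sketch:

    # postgresql.conf -- temporarily, only to see whether the stalls disappear
    autovacuum = off

followed by a pg_ctl reload.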


Carlos H. Reimer | 1 Aug 17:12 2007

RES: Improving select performance

Hi,

I have changed the view to eliminate the bizarre concatenation conditions
but even so the response time did not change.

Changing the join_collapse_limit from 8 to 1 caused the decrease in response
time.

Here is the explain analyze with the join_collapse_limit set to 1:

                                                                            
                                                           QUERY PLAN
------------------------------------------------------------------------------------------------------------------------
 Nested Loop Left Join  (cost=969.53..20638.03 rows=1 width=194) (actual time=10.309..5405.701 rows=256 loops=1)
   Join Filter: ((ven.filpgt = vencodpgt.filpgt) AND (ven.codpgt = vencodpgt.codpgt))
   ->  Nested Loop  (cost=969.53..20635.51 rows=1 width=198) (actual time=10.211..5391.358 rows=256 loops=1)
         Join Filter: ((gra.filmat = ive.filmat) AND (gra.codmat = ive.codmat) AND (gra.codcor = ive.codcor) AND (gra.codtam = ive.codtam))
         ->  Nested Loop  (cost=1.34..3410.10 rows=1 width=150) (actual time=0.248..38.966 rows=414 loops=1)
               Join Filter: (sub.codsub = dsub.codtab)
               ->  Nested Loop  (cost=1.34..3409.04 rows=1 width=159) (actual time=0.237..32.520 rows=414 loops=1)
                     Join Filter: ((gra.codtam)::text = ((sub.codite)::text || ''::text))
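
For anyone reproducing this, the setting can be changed per session before
running the query; a sketch (the view name is illustrative):

    SET join_collapse_limit = 1;
    EXPLAIN ANALYZE SELECT * FROM my_view WHERE id = 1;
    RESET join_collapse_limit;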

Alvaro Herrera | 1 Aug 18:52 2007

Re: RES: Improving select performance

Carlos H. Reimer wrote:
> Hi,
> 
> I have changed the view to eliminate the bizarre concatenation conditions
> but even so the response time did not change.

Are you sure you did that?  In the EXPLAIN it's still possible to see
them, for example

>                ->  Nested Loop  (cost=1.34..3409.04 rows=1 width=159) (actual time=0.237..32.520 rows=414 loops=1)
>                      Join Filter: ((gra.codtam)::text = ((sub.codite)::text || ''::text))
>                      ->  Nested Loop  (cost=1.34..3376.84 rows=28 width=136) (actual time=0.226..20.978 rows=414 loops=1)
>                            ->  Hash Join  (cost=1.34..3356.99 rows=28 width=145) (actual time=0.215..15.225 rows=414 loops=1)
>                                  Hash Cond: ((gra.codcor)::text = ((div.codite)::text || ''::text))

-- 
Alvaro Herrera                 http://www.amazon.com/gp/registry/CTMLCN8V17R4
"Uno combate cuando es necesario... ¡no cuando está de humor!
El humor es para el ganado, o para hacer el amor, o para tocar el
baliset.  No para combatir."  (Gurney Halleck)


Jan Dittmer | 1 Aug 19:37 2007

Re: select on 1milion register = 6s

Scott Marlowe wrote:
>> random_page_cost = 1                    # units are one sequential page fetch
> 
> Seldom if ever is it a good idea to bonk the planner on the head with
> random_page_cost=1.  Setting it to 1.2 or 1.4 is low enough, but 1.4
> to 2.0 is more realistic.

Which is probably the reason why the planner thinks a seq scan is
faster than an index scan...

Jan
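
For what it's worth, the effect is easy to test per session before changing
postgresql.conf; a sketch with an illustrative query:

    -- does the index scan win once random_page_cost is less aggressive?
    SET random_page_cost = 1.4;
    EXPLAIN ANALYZE SELECT * FROM my_table WHERE indexed_col = 42;
    RESET random_page_cost;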


Luke Lonergan | 1 Aug 20:10 2007

Re: Postgres configuration for 64 CPUs, 128 GB RAM...

Marc,

You should expect that for the kind of OLAP workload you describe in steps 2
and 3 you will have exactly one CPU working for you in Postgres.

If you want to accelerate the speed of this processing by a factor of 100 or
more on this machine, you should try Greenplum DB which is Postgres 8.2
compatible.  Based on the overall setup you describe, you could run a hybrid
installation, with GPDB handling the reporting / OLAP workload and the other
Postgres databases handling the customer workloads.

- Luke

On 7/24/07 7:38 AM, "Marc Mamin" <M.Mamin <at> intershop.de> wrote:

>  
> Hello,
> 
> thank you for all your comments and recommendations.
> 
> I'm aware that the conditions for this benchmark are not ideal, mostly
> due to the lack of time to prepare it. We will also need an additional
> benchmark on a less powerful - more realistic - server to better
> understand the scalability of our application.
> 
> 
> Our application is based on Java and generates dynamic reports from
> log file content. Dynamic means here that a report will be calculated
> from the Postgres data the first time it is requested (it will then be
> cached). Java is used to drive the data preparation and to

Carlos H. Reimer | 2 Aug 02:26 2007

RES: RES: Improving select performance

Yes, but as the change did not alter the response time I used the original
view.

Anyway here are the response times using the changed view (without the
concatenation conditions):

with join_collapse_limit set to 8:
------------------------------------------------------------------------------------------------------------------------
 Nested Loop Left Join  (cost=963.68..76116.63 rows=1 width=194) (actual time=8219.028..1316669.201 rows=256 loops=1)
   ->  Nested Loop  (cost=963.68..76116.23 rows=1 width=198) (actual time=8196.502..1316638.186 rows=256 loops=1)
         Join Filter: ((gra.filmat = ive.filmat) AND (gra.codmat = ive.codmat) AND (gra.codcor = ive.codcor) AND (gra.codtam = ive.codtam))
         ->  Nested Loop  (cost=1.11..3370.95 rows=1 width=150) (actual time=33.058..255.428 rows=414 loops=1)
               Join Filter: (div.coddiv = ddiv.codtab)
               ->  Nested Loop  (cost=1.11..3369.89 rows=1 width=159) (actual time=33.043..249.609 rows=414 loops=1)
                     Join Filter: (sub.codsub = dsub.codtab)
                     ->  Nested Loop  (cost=1.11..3368.82 rows=1 width=168) (actual time=33.026..243.603 rows=414 loops=1)
                           Join Filter: ((gra.codcor)::text = (div.codite)::text)
                           ->  Hash Join  (cost=1.11..3356.11 rows=9 width=145) (actual time=33.004..222.375 rows=414 loops=1)
                                 Hash Cond: ((gra.codtam)::text = (sub.codite)::text)

Bryan Murphy | 2 Aug 18:02 2007

cpu throttling

We have a complicated stored procedure that we run frequently.  It
pegs one of our postmaster processes at 100% CPU utilization for a few
hours.  This has the unfortunate side effect of causing increased
latency for our other queries.  We are currently planning a fix, but
because of the complicated nature of this procedure it is going to
take some time to implement.

I've noticed that if I renice the process that is running the query,
the other postmaster processes are able to respond to our other
queries in a timely fashion.

My question:  Is there a way I can decrease the priority of a specific
query, or determine the PID of the process it is running in?  I'd like
to throw together a quick shell script if at all possible, as right
now I have to monitor the process manually and we'll have fixed the
problem long before we have the chance to implement proper database
clustering.

Bryan
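
A minimal sketch of that kind of script, assuming an 8.x server
(pg_stat_activity exposes the backend PID as procpid and the running
statement as current_query on that release; the database name and match
pattern are illustrative), run as a user allowed to renice the backends:

    #!/bin/sh
    # find backends currently running the expensive procedure and lower their priority
    PIDS=$(psql -t -A -d MyDatabase -c \
        "SELECT procpid FROM pg_stat_activity WHERE current_query LIKE '%my_slow_proc%';")
    for p in $PIDS; do
        renice +10 -p "$p"
    done

One caveat: a reniced backend still holds any locks it acquires, so sessions
waiting on those locks are slowed down just as much (a form of priority
inversion); this is a stopgap rather than a fix.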


