slackman | 1 Feb 02:09 2006

where i can find ?

Hi all,
Where can I find a tutorial on using PL/Python in PostgreSQL?
Can anybody help me?

Thank you.

---------------------------(end of broadcast)---------------------------
TIP 4: Have you searched our list archives?

               http://archives.postgresql.org

Michael Fuhr | 1 Feb 03:15 2006

Re: where i can find ?

On Wed, Feb 01, 2006 at 08:09:48AM +0700, slackman <at> unika.ac.id wrote:
> Where can I find a tutorial on using PL/Python in PostgreSQL?

If search engines don't find anything then there might not be any.
However, there are Python tutorials and PostgreSQL tutorials; between
those and the PL/Python documentation there should be enough
information to learn how to write PL/Python functions.  Are you
having trouble?

-- 
Michael Fuhr


Guido Barosio | 1 Feb 03:21 2006

Re: where i can find ?

FYI

http://www.postgresql.org/docs/8.1/interactive/plpython.html
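A minimal example in the spirit of those docs (a sketch; it assumes the plpythonu language has already been installed in your database, e.g. with createlang plpythonu):

```sql
-- hypothetical example: return the larger of two integers
CREATE FUNCTION pymax (a integer, b integer)
RETURNS integer AS $$
    -- the function body is ordinary Python; arguments arrive by name
    if a > b:
        return a
    return b
$$ LANGUAGE plpythonu;

-- usage:
-- SELECT pymax(3, 7);
```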

Best regards,
Guido Barosio

On 2/1/06, Michael Fuhr <mike <at> fuhr.org> wrote:
On Wed, Feb 01, 2006 at 08:09:48AM +0700, slackman <at> unika.ac.id wrote:
> Where can I find a tutorial on using PL/Python in PostgreSQL?

If search engines don't find anything then there might not be any.
However, there are Python tutorials and PostgreSQL tutorials; between
those and the PL/Python documentation there should be enough
information to learn how to write PL/Python functions.  Are you
having trouble?

--
Michael Fuhr




--
/"\   ASCII Ribbon Campaign  .
\ / - NO HTML/RTF in e-mail  .
X  - NO Word docs in e-mail .
/ \ -----------------------------------------------------------------
Düster Horst | 1 Feb 08:35 2006

Re: query to select the table primary key column name

Uwe

Thank you very much for your help. That was exactly what I was looking for.

Best regards
 
Dr. Horst Düster 
GIS-Koordinator, Stv. Amtsvorsteher

Kanton Solothurn 
Amt für Geoinformation
Abteilung SO!GIS Koordination
Rötistrasse 4
CH-4501 Solothurn 

Tel.: ++41 (0)32 627 25 32 
Fax: ++41 (0)32 627 22 14

horst.duester <at> bd.so.ch 
www.sogis.so.ch

> -----Original Message-----
> From: Uwe C. Schroeder [mailto:uwe <at> oss4u.com]
> Sent: Tuesday, 31 January 2006 19:29
> To: pgsql-admin <at> postgresql.org
> Cc: Duster Horst; 'pgsql-admin <at> postgresql.org'
> Subject: Re: [ADMIN] query to select the table primary key column name
> 
> Check the information_schema views.  In particular, you want to look at
> information_schema.key_column_usage and
> information_schema.table_constraints.
> 
> The latter gives you the constraint type (in your case 'PRIMARY KEY')
> and the constraint name, which you can then match to the constraint
> name in key_column_usage to get all the columns that make up the
> primary key.
> 
> Hope that helps.
> 
> BTW: using the information schema is the better way to go for system
> catalog queries.  Every major release will potentially have changes in
> the system catalogs, whereas the views in the information schema won't
> change much.  They're designed to give you a stable API for the system
> catalogs.
> 
> 
> On Tuesday 31 January 2006 08:48, Düster Horst wrote:
> > I'm looking out for a system catalog query to select the 
> primary key column
> > name for a specific table. With the following query I only 
> get the name of
> > the primary key itself. How do I get the primary key column name???
> >
> > select c.*, pg_index.indisprimary
> > from pg_constraint c, pg_namespace, pg_class, pg_index
> > where pg_namespace.nspname='public'
> >   and pg_namespace.oid=c.connamespace
> >   and pg_class.relname='new_layer'
> >   and pg_class.oid=c.conrelid
> >   and pg_class.oid=pg_index.indrelid
> >   and c.contype='p'
> >
> > I'll appreciate any help.
> >
> > with best regards
> >
> > Dr. Horst Düster
> >
> > Kanton Solothurn 
> > Amt für Geoinformation
> > Abteilung SO!GIS Koordination
> > Rötistrasse 4
> > CH-4501 Solothurn
> >
> > Tel.: ++41 (0)32 627 25 32
> > Fax: ++41 (0)32 627 22 14
> >
> > horst.duester <at> bd.so.ch
> > www.sogis.so.ch
> >
> >
> 
> -- 
> 	UC
> 
> --
> Open Source Solutions 4U, LLC	1618 Kelly St
> Phone:  +1 707 568 3056		Santa Rosa, CA 95401
> Cell:   +1 650 302 2405		United States
> Fax:    +1 707 568 6416
> 
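
Putting Uwe's pointers together, one possible sketch of such a query (the table name 'new_layer' and the 'public' schema are taken from the original post):

```sql
-- one row per column of the primary key, in key order
SELECT kcu.column_name
  FROM information_schema.table_constraints tc
  JOIN information_schema.key_column_usage kcu
    ON kcu.constraint_name = tc.constraint_name
   AND kcu.table_schema    = tc.table_schema
 WHERE tc.constraint_type = 'PRIMARY KEY'
   AND tc.table_schema    = 'public'
   AND tc.table_name      = 'new_layer'
 ORDER BY kcu.ordinal_position;
```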

---------------------------(end of broadcast)---------------------------
TIP 9: In versions below 8.0, the planner will ignore your desire to
       choose an index scan if your joining column's datatypes do not
       match

navicat | 1 Feb 10:00 2006

Navicat PostgreSQL Manager for Mac OS X (ver. 6.1.3) is now available

Navicat PostgreSQL Manager for Mac OS X has been enhanced to support:
 
1. Database Roles (PostgreSQL version 8.1.0)
2. Syntax Highlighting for the SQL Editor
 
Major features of Navicat at a glance:

- Visual Query Builder
- SSH Tunnel for remote database connection
- Database transfer from server to server
- Data and Structure Synchronization
- Data backup and restore
- Import/Export of most popular data formats to and from PostgreSQL
- Task Scheduling and easy-to-use wizard tools
 
Navicat is a powerful yet easy-to-use PostgreSQL graphical interface that provides extensive functionality for managing and developing PostgreSQL databases. Navicat is sophisticated enough for professional developers, yet easy to learn for new users. It is also available for Mac OS X.

A trial version can be downloaded at http://pgsql.navicat.com/download.html

To find out more about this PostgreSQL GUI, please visit: http://pgsql.navicat.com
Chris Browne | 1 Feb 18:14 2006

Re: autovacuum

matthew <at> zeut.net ("Matthew T. O'Connor") writes:
> Legit concern.  However, one of the things that autovacuum is supposed
> to do is not vacuum tables that don't need it.  This can result in an
> overall reduction in vacuum overhead.  In addition, if you see that
> autovacuum is firing off vacuum commands during the day and they are
> impacting your response time, then you can play with the vacuum cost
> delay settings that are designed to throttle down the IO impact vacuum
> commands can have.  In addition, if you use 8.1, you can set per-table
> thresholds and per-table vacuum cost delay settings, and autovacuum
> will respect the work done by non-autovacuum vacuum commands, meaning
> that if you manually vacuum tables at night during a maintenance
> window, autovacuum will take that into account.  Contrib autovacuum
> couldn't do this.
>
> Hope that helps.  Real-world feedback is always welcome.

I have a question/suggestion...

Something we found useful with Slony-I was the notion of checking the
eldest XID on the system to see if there was any point at all in
bothering to vacuum.  I don't see anything analogous in autovacuum.c;
this might well be a useful addition.

In the Slony-I cleanup thread loop, we collect, in each iteration, the
current earliest XID.

In each iteration of this loop, we check to see if that XID has
changed.

- First time through, it changes from 0 to 'some value' and so it tries
  to do a vacuum.

- But supposing you have some long running transaction (say, a pg_dump
  that runs for 2h), it becomes pretty futile to bother trying to
  vacuum things for the duration of that transaction, because that
  long running transaction will, via MVCC, hold onto any old tuples.

It strikes me as a slick idea for autovacuum to take on that
behaviour.  If the daily backup runs for 2h, then it is quite futile
to bother vacuuming a table multiple times during that 2h period when
none of the tuples obsoleted during the 2h period will be able to be
cleaned out until the end.

Presumably this means that, during that 2h period, pg_autovacuum would
probably only issue ANALYZE statements...
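
As an aside, one hedged way to notice such a long-running backend from SQL on an 8.1-era server (column names assumed from pg_stat_activity of that vintage; the two-hour cutoff is arbitrary):

```sql
-- backends whose current query has been running for more than two hours
SELECT procpid, usename, query_start, current_query
  FROM pg_stat_activity
 WHERE query_start < now() - interval '2 hours'
 ORDER BY query_start;
```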
-- 
let name="cbbrowne" and tld="ntlug.org" in String.concat " <at> " [name;tld];;
http://www.ntlug.org/~cbbrowne/languages.html
Rules of  the Evil Overlord #51.  "If one of my  dungeon guards begins
expressing  concern over  the  conditions in  the beautiful  princess'
cell,  I  will immediately  transfer  him  to  a less  people-oriented
position." <http://www.eviloverlord.com/>


Alvaro Herrera | 1 Feb 18:29 2006

Re: [HACKERS] autovacuum

Chris Browne wrote:

> It strikes me as a slick idea for autovacuum to take on that
> behaviour.  If the daily backup runs for 2h, then it is quite futile
> to bother vacuuming a table multiple times during that 2h period when
> none of the tuples obsoleted during the 2h period will be able to be
> cleaned out until the end.

Hmm, yeah, sounds useful.  There's one implementation issue to notice
however, and it's that the autovacuum process dies and restarts for each
iteration, so there's no way for it to remember previous state unless
it's saved somewhere permanent, as the stats info is.

However this seems at least slightly redundant with the "maintenance
window" feature -- you could set a high barrier to vacuum during the
daily backup period instead.  (Anybody up for doing this job?)

-- 
Alvaro Herrera                  http://www.amazon.com/gp/registry/5ZYLFMCVHXC
"No single strategy is always right (Unless the boss says so)"
                                                  (Larry Wall)


Matthew T. O'Connor | 1 Feb 18:43 2006

Re: autovacuum

Alvaro Herrera wrote:
> Chris Browne wrote:
> 
>> It strikes me as a slick idea for autovacuum to take on that
>> behaviour.  If the daily backup runs for 2h, then it is quite futile
>> to bother vacuuming a table multiple times during that 2h period when
>> none of the tuples obsoleted during the 2h period will be able to be
>> cleaned out until the end.
> 
> Hmm, yeah, sounds useful.  There's one implementation issue to notice
> however, and it's that the autovacuum process dies and restarts for each
> iteration, so there's no way for it to remember previous state unless
> it's saved somewhere permanent, as the stats info is.
> 
> However this seems at least slightly redundant with the "maintenance
> window" feature -- you could set a high barrier to vacuum during the
> daily backup period instead.  (Anybody up for doing this job?)

I can't promise anything, but it's on my list of things to hopefully 
find time for in the coming months.  No way I can start it in Feb, but 
maybe sometime in March.  Anyone else?

Matt


Tom Lane | 1 Feb 19:18 2006

Re: autovacuum

Alvaro Herrera <alvherre <at> alvh.no-ip.org> writes:
> Hmm, yeah, sounds useful.  There's one implementation issue to notice
> however, and it's that the autovacuum process dies and restarts for each
> iteration, so there's no way for it to remember previous state unless
> it's saved somewhere permanent, as the stats info is.

I think you'd really need to remember the previous oldest XID on a
per-table basis to get full traction out of the idea.  But weren't we
thinking of tracking something isomorphic to this for purposes of
minimizing anti-wraparound VACUUMs?

			regards, tom lane

---------------------------(end of broadcast)---------------------------
TIP 6: explain analyze is your friend

Chris Browne | 1 Feb 21:50 2006

Re: autovacuum

matthew <at> zeut.net ("Matthew T. O'Connor") writes:
> Hope that helps.  Real-world feedback is always welcome.

While I'm at it, I should throw in an idea that I had a little while
back about a "vacuum request manager."

This is kind of orthogonal to everything else that has been happening
with pg_autovacuum...

One of the troubles we have been hitting with our homebrew scripts is
that when locking doesn't work out, they start submitting multiple
vacuums at once, which sometimes builds up to ill effect.

A thought I had was to create a daemon that would serially process
requests.  It would just watch a table of requests, and when it finds
work, start work.

We'd then have some sort of "injection" process that would tell the
daemon "Here's new work!"

Requests would be defined thus:

/* cbbrowne <at> [local]/dba2 vacdb=*/ \d vacuum_requests
                  Table "public.vacuum_requests"
    Column    |           Type           |       Modifiers        
--------------+--------------------------+------------------------
 vtable       | text                     | not null
 vhost        | text                     | not null
 vdatabase    | text                     | not null
 urgency      | integer                  | not null default 1
 created_on   | timestamp with time zone | not null default now()
 completed_on | timestamp with time zone | 
 failed_at    | timestamp with time zone | 
Indexes:
    "vacuum_requests_pkey" primary key, btree (vtable, vhost, vdatabase, created_on)
    "vr_priority" btree (vhost, vdatabase, urgency) WHERE ((completed_on IS NULL) AND (failed_at IS NULL))

/* cbbrowne <at> [local]/dba2 vacdb=*/ \d vacuum_start
                   Table "public.vacuum_start"
    Column    |           Type           |       Modifiers        
--------------+--------------------------+------------------------
 vtable       | text                     | not null
 vhost        | text                     | not null
 vdatabase    | text                     | not null
 started_on   | timestamp with time zone | not null default now()
 completed_on | timestamp with time zone | 
Indexes:
    "vacuum_start_pkey" primary key, btree (vtable, vhost, vdatabase, started_on)

/* cbbrowne <at> [local]/dba2 vacdb=*/ \d vacuum_failures
                 Table "public.vacuum_failures"
   Column   |           Type           |       Modifiers        
------------+--------------------------+------------------------
 vtable     | text                     | not null
 vhost      | text                     | not null
 vdatabase  | text                     | not null
 started_on | timestamp with time zone | not null
 failed_on  | timestamp with time zone | not null default now()
Indexes:
    "vacuum_failures_pkey" primary key, btree (vtable, vhost, vdatabase, started_on)

This has a bit more generality than would be needed for handling just
one postmaster; host/database would allow this to be used to manage
multiple backends...

We have, in our "kludged-up scripts," three levels of granularity:

 1.  There are tables we vacuum every few minutes; they would be at
     urgency 1; every few minutes, we would, in effect, run the query...

     insert into vacuum_requests (vtable, vhost, vdatabase, urgency)
       select t.fqtablename, h.hostname, tld.name, 1
          from urgent_tables t, all_hosts h, all_tlds tld;

 2.  Then, there are "hourly" tables, at urgency level 2.

     Once an hour, we run:

     insert into vacuum_requests (vtable, vhost, vdatabase, urgency)
       select t.fqtablename, h.hostname, tld.name, 2
          from hourly_tables t, all_hosts h, all_tlds tld;

 3.  Once a day, we'd do something kind of like:

     insert into vacuum_requests (vtable, vhost, vdatabase, urgency)
       select table_schema || '.' || table_name, h.hostname, tld.name, 3 
         from information_schema.tables, all_hosts h, all_tlds tld 
         where table_type = 'BASE TABLE' and table_schema in ('public', 'pg_catalog');

The event loop for the daemon would be to look up the highest priority
table, and add an entry to vacuum_start.

Then it vacuums the table.

If that succeeds, the table is marked as complete in both
vacuum_start, and, FOR EVERY ENTRY CURRENTLY OUTSTANDING, in
vacuum_requests.  Thus, if a table is queued up 20 times, it will be
vacuumed once, and marked as done 20 times.

If that fails, all the relevant entries in vacuum_start and
vacuum_requests are marked with the failure information, and a record
is added to the failures table.
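
The dequeue-and-complete steps that loop describes might look something like this (a sketch against the tables above; locking, the actual VACUUM call, and retry handling omitted, and the table name in the UPDATE is hypothetical):

```sql
-- pick the most urgent outstanding request (urgency 1 = most urgent)
SELECT vtable, vhost, vdatabase
  FROM vacuum_requests
 WHERE completed_on IS NULL
   AND failed_at IS NULL
 ORDER BY urgency, created_on
 LIMIT 1;

-- on success, mark EVERY outstanding request for that table complete,
-- so a table queued 20 times is vacuumed once and marked done 20 times
UPDATE vacuum_requests
   SET completed_on = now()
 WHERE vtable = 'public.some_table'
   AND completed_on IS NULL
   AND failed_at IS NULL;
```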

We're putting this off, pending the thought that, with 8.1, it's worth
testing out pg_autovacuum again.

The above is an "in-the-database" way of queueing up requests,
associating priorities to them, and having the queue be
administrator-visible.

We were anticipating using our present quasi-kludgy scripts to add our
favorite tables to the queue; it would seem a nice/natural thing for
there to be some automatic process (à la the pg_autovacuum daemon) that
could add things to the queue based on its knowledge of updates.

My thought is that if anything about the above appears useful to
pg_autovacuum, I'd be happy if pg_autovacuum grabbed (stole? ;-)) some
of the ideas.
-- 
"cbbrowne"," <at> ","cbbrowne.com"
http://cbbrowne.com/info/sap.html
"The X-Files are too optimistic.  The truth is *not* out there..."
-- Anthony Ord <nws <at> rollingthunder.co.uk>

---------------------------(end of broadcast)---------------------------
TIP 1: if posting/reading through Usenet, please send an appropriate
       subscribe-nomail command to majordomo <at> postgresql.org so that your
       message can get through to the mailing list cleanly

