xbzhang | 23 Apr 05:07 2014

Are there rules for adding a parameter in postgresql.conf?

I want to add some functionality to the INSERT statement, but I cannot
decide whether I should add a parameter in postgresql.conf or add a
parameter to the INSERT statement itself.
Are there any rules to help a developer decide?
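
To illustrate the trade-off from the user's point of view (all names
below are made up; this is only a sketch, not a concrete proposal): a
configuration parameter applies to the whole session, while new syntax
applies per statement but requires grammar changes.

  CREATE TABLE t (x int);

  -- Design A: a custom, prefixed parameter, settable in postgresql.conf
  -- or per session (no registration needed on PostgreSQL 9.2+):
  SET my_extension.insert_feature = on;
  INSERT INTO t VALUES (1);   -- the feature would now apply to every INSERT

  -- Design B: per-statement syntax (hypothetical, needs parser changes):
  -- INSERT INTO t VALUES (1) WITH MY_FEATURE;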


Zhang Xiaobo, R&D Department 2

Beijing Kingbase Information Technology Co., Ltd.

Address: Room 501, Building 4, Shangdi Science & Technology Building, Yard 8, Shangdi West Road, Haidian District, Beijing

Postal code: 100085

Tel: (010) 5885 1118 - 8450

Mobile: 15311394463

Email: xbzhang <at> kingbase.com.cn

AI Rumman | 23 Apr 01:58 2014

Is it good to have a toast table for an information_schema table?

Hi,

Is it good to have a toast table for an information_schema table? I am using PostgreSQL 8.4 and the current state is:

select datname, datfrozenxid from pg_database;           
  datname  | datfrozenxid 
-----------+--------------
 template1 |   1462730397
 template0 |   1462741467
 postgres  |   1562754912
 jangles   |   1459615432
(4 rows)


select * from pg_class  where relfrozenxid  = 1459615432;
    relname     | reltoastidxid | relhasindex | relfrozenxid 
----------------+---------------+-------------+--------------
 pg_toast_11447 |         11451 | t           |   1459615432 
(1 row)


select 11447::regclass;                                  
            regclass             
---------------------------------
 information_schema.sql_features
(1 row)
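
To see more generally which relation is holding back datfrozenxid,
relations can be ranked by transaction-ID age (standard catalogs,
nothing 8.4-specific):

select c.oid::regclass, age(c.relfrozenxid) as xid_age
from pg_class c
where c.relkind in ('r', 't')   -- plain and toast tables carry a relfrozenxid
order by age(c.relfrozenxid) desc
limit 5;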

Please advise.

Thanks.
Wells Oliver | 23 Apr 00:52 2014

Using 9.3 as a slave to 9.1 for upgrade purposes

As a way of upgrading, I'd like to set up a 9.3 cluster as a slave to a 9.1 master, using streaming replication, so that I can then promote the 9.3 instance to master.

I'm curious whether this is a possible/advisable route.

--
Wells Oliver
wellsoliver <at> gmail.com
Tomas Vondra | 23 Apr 00:24 2014

aggregate returning anyarray and 'cannot determine result data type'

Hi all,

I needed to implement an aggregate producing a random sample, with an
upper bound on the number of items, i.e. not the usual "5% of values"
but "up to 1000 values".

So my plan was to do something like this:

  sample_append(internal, anyelement, int) -> internal
  sample_final(internal) -> anyarray

  CREATE AGGREGATE sample_agg(anyelement, int) (
      SFUNC = sample_append,
      STYPE = internal,
      FINALFUNC = sample_final
  );

where 'internal' represents a pointer to a structure with all the info
(limit, currently accumulated sample, ...).

However, this leads to:

  ERROR:  cannot determine result data type
  DETAIL:  A function returning a polymorphic type must have at least
           one polymorphic argument

because 'sample_final' returns anyarray but has no polymorphic
argument. Sadly, the 'natural' solution of using anyarray instead of the
internal structure does not work either, because then I have no way to
pass the other parameters (the trailing int in the sfunc) to the final
function.
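
For what it's worth, pinning the element type to a concrete type
compiles fine, since nothing polymorphic is returned, though then I'd
have to clone the aggregate for every element type. A minimal
reservoir-sampling sketch for int, just to show the shape (not a
solution to the polymorphic case):

  -- state[1] = limit, state[2] = rows seen so far, state[3..] = sample
  CREATE FUNCTION sample_append(state int[], val int, lim int)
  RETURNS int[] LANGUAGE plpgsql AS $$
  DECLARE
      n int;
      k int;
  BEGIN
      IF state IS NULL THEN
          state := ARRAY[lim, 0];
      END IF;
      n := state[2] + 1;
      state[2] := n;
      IF n <= state[1] THEN
          state := state || val;                -- reservoir not full yet
      ELSE
          k := 1 + floor(random() * n)::int;    -- replace with prob. lim/n
          IF k <= state[1] THEN
              state[2 + k] := val;
          END IF;
      END IF;
      RETURN state;
  END $$;

  CREATE FUNCTION sample_final(state int[])
  RETURNS int[] LANGUAGE sql AS
  $$ SELECT $1[3:array_length($1, 1)] $$;

  CREATE AGGREGATE sample_agg(int, int) (
      SFUNC = sample_append,
      STYPE = int[],
      FINALFUNC = sample_final
  );

Usage would then be e.g. SELECT sample_agg(x, 10) FROM tbl;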

Any ideas how to solve this?

regards
Tomas

Craig Libscomb | 22 Apr 23:01 2014

Re: [SOLVED] Can't delete role because of unknown object

On Tue, Apr 22, 2014 at 3:30 PM, Tom Lane <tgl <at> sss.pgh.pa.us> wrote:
>
> Craig Libscomb <craiglibscomb1972 <at> gmail.com> writes:
> > The following command:
> > DROP USER IF EXISTS jpate;
>
> > generates the following output:
> > ERROR:  role "jpate" cannot be dropped because some objects depend on it
> > DETAIL:  1 object in database products
>
> > It would be most helpful to know what object in the products database
> > depends on the jpate role, but I am unable to find anything that even
> > begins to offer a clue. What command will show me this mysterious object,
> > please?
>
> You need to connect to that database and try the DROP USER from there.
> It's not possible to produce very much info about the problematic
> object when not connected to its database.  (I suppose we could tell
> you the object kind, ie table/function/etc, but not more than that.)

*sigh*, yep, I was connected to the wrong database. After connecting to
the right one, the error told me the dependency was privileges on the
public schema. REVOKEing those then allowed me to DROP the user.
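
For the archives, the working sequence was roughly this (run while
connected to the products database; exact privileges may differ in
other setups):

REVOKE ALL ON SCHEMA public FROM jpate;
DROP USER jpate;

DROP OWNED BY jpate; would have been the blunter, catch-all
alternative, since it also revokes privileges granted to the role.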

Thanks!

>
>
>                         regards, tom lane

Craig Libscomb | 22 Apr 21:47 2014

Can't delete role because of unknown object

The following command:
DROP USER IF EXISTS jpate;

generates the following output:
ERROR:  role "jpate" cannot be dropped because some objects depend on it
DETAIL:  1 object in database products

It would be most helpful to know what object in the products database depends on the jpate role, but I am unable to find anything that even begins to offer a clue. What command will show me this mysterious object, please?

Thanks,
Craig
Sim Zacks | 22 Apr 16:39 2014

importing downloaded data

PostgreSQL 9.3
I am downloading data that I want to import into a table. The data comes in tab-delimited, CRLF-terminated format.
I am using plpython to fetch the data, and I wanted to use COPY FROM STDIN to import it without having to save it to a file.
To do this, I set sys.stdin to a StringIO object with my data inside.

However, when I call COPY I get SPI_ERROR_COPY, which the docs describe as raised "if COPY TO stdout or COPY FROM stdin was attempted". That is exactly what I am trying to do, so the error is logical.

Is there another method of loading delimited data in bulk without first saving it to a file?
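
The fallback I can see is to parse the data in plpython and push it
through a prepared INSERT, though that gives up COPY's bulk speed
(a sketch; table and columns are made up):

CREATE FUNCTION load_tsv(data text) RETURNS integer
LANGUAGE plpythonu AS $$
# Parse the tab-delimited, CRLF-terminated payload and INSERT row by row.
plan = plpy.prepare(
    "INSERT INTO target_table (col_a, col_b) VALUES ($1, $2)",
    ["text", "text"])
n = 0
for line in data.split("\r\n"):
    if not line:
        continue
    plpy.execute(plan, line.split("\t")[:2])
    n += 1
return n
$$;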

Thanks
Sim
Ian Barwick | 22 Apr 14:21 2014

Re: Fwd: How to ignore blank lines with file_fdw

On 22/04/14 21:09, Nicklas Avén wrote:

> Hello
>
> I am struggling to find the best solution to ignore blank lines in
> a csv-file when using file_fdw.
>
> A blank line makes the table unreadable.
>
> I would like to avoid manipulating the file directly and avoid the
> need to make a new corrected copy of the file.
>
> I am on Linux so I have found a solution when using COPY:
> COPY test_table from program 'sed ''/^ *$/d''
> /opt/builds/inotify_test/test.csv'  with (format 'csv', header
> 'true');
>
> but since the "program" option does not seem to be implemented in
> file_fdw, I am still searching for a solution.

file_fdw uses the same mechanism internally as "COPY <table> FROM '/file.csv'";
I don't think there's currently a way for this mechanism to ignore blank
lines.
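
The nearest workaround I can think of, staying with the COPY machinery,
is to reload an ordinary staging table on demand via COPY ... FROM
PROGRAM (a sketch; requires superuser, and reuses the table and sed
command from your mail):

CREATE OR REPLACE FUNCTION refresh_test_table() RETURNS void
LANGUAGE plpgsql AS $$
BEGIN
    TRUNCATE test_table;
    COPY test_table
        FROM PROGRAM 'sed ''/^ *$/d'' /opt/builds/inotify_test/test.csv'
        WITH (format 'csv', header 'true');
END $$;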

Unfortunately CSV is not exactly a well-defined standard, so it's debatable
whether it's worth modifying the mechanism to cope with this situation.
The closest thing to a standard, RFC 4180 ( http://tools.ietf.org/html/rfc4180 ),
doesn't seem to have anything to say about blank lines; on the other hand,
LibreOffice Calc will happily import files containing them.

> I have also found in an email from 2011
> http://www.postgresql.org/message-id/4E699DE6.8010606 <at> gmail.com
>
> that when force_not_null was implemented in file_fdw the patch also
> included "some cosmetic changes such as removing useless blank lines."
> But I do not find that blank lines is removed in general since I
> cannot read csv-files with blank lines, and I do not understand how
> the option "force_not_null" can do the trick since that is on the
> column level and not lines/row.

The "blank lines" referred to here are in the source code itself.

Regards

Ian Barwick

-- 
  Ian Barwick                   http://www.2ndQuadrant.com/
  PostgreSQL Development, 24x7 Support, Training & Services

Torsten Förtsch | 22 Apr 14:10 2014

Is this a planner bug?

Hi,

I got this plan:

Limit  (cost=0.00..1.12 rows=1 width=0)
   ->  Seq Scan on fmb  (cost=0.00..6964734.35 rows=6237993 width=0)
         Filter: ...

The table has ~80,000,000 rows. So, the filter, according to the plan,
filters out >90% of the rows. Although the cost for the first row to
come out of the seqscan might be 0, the cost for the first row to pass
the filter and, hence, to hit the limit node is probably higher.
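
(If I read the costing right, the 1.12 is plain linear interpolation:
6964734.35 * (1 / 6237993) ≈ 1.12, i.e. the planner assumes the ~6.2M
qualifying rows are spread uniformly through the table, so it expects a
match within the first dozen or so rows scanned.)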

Thanks,
Torsten

--

-- 
Sent via pgsql-general mailing list (pgsql-general <at> postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general

Tim Kane | 22 Apr 14:01 2014

hstore binary representation of keys

Hi all,

I’ve been using hstore to record various key/val pairs, but I’ve noticed it consumes a lot more disk than I would have expected.
I don’t have any hard figures to illustrate this, but empirical testing has shown that if I record these pairs as traditional column-based fields, I can save a significant amount of disk.

What I think I’m seeing here is that the hstore representation needs to store the entire key alongside each value.

Let’s say I have a table of 1000 records, and 900 of them have a key named ‘A_REALLY_REALLY_REALLY_LONG_KEY’; then this key will be written to disk 900 times, along with the appropriate values.
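
This is easy to measure with pg_column_size (hstore extension
installed), e.g.:

  select pg_column_size(hstore('A_REALLY_REALLY_REALLY_LONG_KEY', '1')) as hstore_bytes,
         pg_column_size(1::int) as column_bytes;

The hstore value's size is dominated by the key text, while a plain int
column is a fixed 4 bytes.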


I guess there are two options open to me here.

  1. I could transpose these values into a dedicated field
  2. I could use shorter key names

Does hstore2 go any way toward improving this situation? Some kind of enumerated-key system, perhaps?



Cheers,

Tim

Nicklas Avén | 22 Apr 13:45 2014

How to ignore blank lines with file_fdw

Hello

I am struggling to find the best solution to ignore blank lines in a csv-file when using file_fdw.

A blank line makes the table unreadable.

I would like to avoid manipulating the file directly and avoid the need to make a new corrected copy of the file.

I am on Linux so I have found a solution when using COPY:
COPY test_table from program 'sed ''/^ *$/d'' /opt/builds/inotify_test/test.csv'  with (format 'csv', header 'true');

but since the "program" option does not seem to be implemented in file_fdw, I am still searching for a solution.
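
For reference, the file_fdw setup that breaks on blank lines looks like
this (column definitions are illustrative):

CREATE EXTENSION file_fdw;
CREATE SERVER file_server FOREIGN DATA WRAPPER file_fdw;
CREATE FOREIGN TABLE test_table_fdw (id int, name text)
    SERVER file_server
    OPTIONS (filename '/opt/builds/inotify_test/test.csv',
             format 'csv', header 'true');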

I have also found in an email from 2011 
http://www.postgresql.org/message-id/4E699DE6.8010606 <at> gmail.com

that when force_not_null was implemented in file_fdw the patch also included "some cosmetic changes such as removing useless blank lines."
But I do not find that blank lines are removed in general, since I cannot read csv-files with blank lines, and I do not understand how the option "force_not_null" can do the trick, since it works at the column level, not the line/row level.

Any good ideas out there?

Thanks
Nicklas Avén
