Dan S | 28 Jul 18:18 2014

strange result from query, bug ?

Hi,

I'm running "PostgreSQL 9.3.5 on i686-pc-linux-gnu, compiled by gcc-4.4.real (Debian 4.4.5-8) 4.4.5, 32-bit"

I've run into a strange problem with a query.
I wonder if it might be a bug or a misunderstanding from my side.

Steps to recreate the problem:

Generate the necessary test data:
create table random_draw( id int not null , constraint random_draw_id_pk primary key(id));

insert into random_draw
select *
from generate_series(1,1000);

Run this query several times:
select (select id from random_draw where id=((random()*999.0)::int)+1) as rnd_id, random(), *
from generate_series(1,1000);

The query sometimes gives the error:
ERROR:  more than one row returned by a subquery used as an expression
********** Error **********

ERROR: more than one row returned by a subquery used as an expression
SQL state: 21000

Sometimes the rnd_id column is NULL, and sometimes it gives an expected answer (an integer between 1 and 1000).

Why does it sometimes say the subquery returned more than one row?
Why does it sometimes give NULL in the rnd_id column?
I would have expected the subquery to get re-executed for each row from generate_series,
because the random() function in the WHERE clause expression is volatile?

Best Regards
Dan S

P.S.
I've since rewritten the query as below to get the expected results, but I still thought I should ask whether it is a bug.

with
cte as
(
    select generate_series,(random()*999.0)::int + 1 as id from generate_series(1,1000)
)
select (select id from random_draw where random_draw.id=cte.id) as rnd_id,random(),generate_series
from cte
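The behaviour described above is consistent with the volatile random() being re-evaluated for every row the subquery scans, so each id is compared against a fresh random value: zero matches gives NULL, and two or more matches gives the 21000 error. A rough simulation of that per-row evaluation in Python (illustrative only; the rounding behaviour of ::int is approximated with int()):

```python
import random

# Simulate the subquery's WHERE clause: a volatile random() is compared
# against each of the 1000 ids, re-evaluated once per row scanned.
def volatile_subquery(ids):
    return [i for i in ids if i == int(random.random() * 999) + 1]

random.seed(42)  # fixed seed, purely so the illustration is reproducible
counts = [len(volatile_subquery(range(1, 1001))) for _ in range(200)]
# Across runs the "subquery" returns 0 rows (NULL), 1 row, or >1 rows.
```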

François Beausoleil | 28 Jul 16:09 2014

Pairwise array sum aggregate function?

Hi all,

NOTE: Also posted to StackOverflow: http://stackoverflow.com/questions/24997131/pairwise-array-sum-aggregate-function

I have a table with arrays as one column, and I want to sum the array elements together:

> create table regres(a int[] not null);
> insert into regres values ('{1,2,3}'), ('{9, 12, 13}');
> select * from regres;
     a
-----------
 {1,2,3}
 {9,12,13}

I want the result to be:

{10, 14, 16}

that is: {1 + 9, 2 + 12, 3 + 13}.

Does such a function already exist somewhere? The intagg extension looked like a good candidate, but it does not provide such a function.

The arrays are expected to be between 24 and 31 elements in length, all elements are NOT NULL, and the arrays
themselves will also always be NOT NULL. All elements are basic int. There will be more than two rows per aggregate.

My implementation target is:

 PostgreSQL 9.1.13 on x86_64-unknown-linux-gnu, compiled by gcc (Ubuntu/Linaro 4.6.3-1ubuntu5)
4.6.3, 64-bit
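In case it helps pin down the semantics: the desired aggregate is just an element-wise sum across positions, which over the example rows is a one-liner in Python:

```python
rows = [[1, 2, 3], [9, 12, 13]]  # the two example arrays from the table

# zip(*rows) pairs up elements by position; sum each position.
pairwise_sum = [sum(elems) for elems in zip(*rows)]
```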

Thanks!
François

--

-- 
Sent via pgsql-general mailing list (pgsql-general <at> postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general

klo uo | 27 Jul 08:55 2014

Calltips in pgAdmin SQL Editor

Hi,

I believe the SQL Editor in the pgAdmin wx GUI is Scintilla.
Scintilla supports calltips (as in SciTE with *.api files).

If the above is true, is there a way to make the SQL Editor support calltips for user-defined commands?

René Leonhardt | 26 Jul 16:56 2014

How to use pg_upgrade for beta versions

It has been recommended to use pg_upgrade to go from 9.4 beta 1 to beta 2:
http://www.postgresql.org/message-id/20e710fd3e4790ee1051aa1e107d3708 <at> postgresql.org

How do I do that on the Ubuntu 14.04.1 PPA?
http://www.postgresql.org/download/linux/ubuntu/

Upgrading directly is not possible:
$ sudo apt-get dist-upgrade
Preparing to unpack .../postgresql-9.4_9.4~beta2-1.pgdg14.04+1_amd64.deb ...
ERROR: The database format changed between beta 1 and 2. Please dump
your 9.4 clusters first and remove them before upgrading the package.
dpkg: error processing archive
/var/cache/apt/archives/postgresql-9.4_9.4~beta2-1.pgdg14.04+1_amd64.deb
(--unpack):
 subprocess new pre-installation script returned error exit status 1
Errors were encountered while processing:
 /var/cache/apt/archives/postgresql-9.4_9.4~beta2-1.pgdg14.04+1_amd64.deb
E: Sub-process /usr/bin/dpkg returned an error code (1)

Executing apt-get install -f afterwards shows the same error.

At least I am able to upgrade/install postgresql-contrib-9.4
(9.4~beta2-1.pgdg14.04+1), which gives me the following tools:
$ ls -1 /usr/lib/postgresql/9.4/bin/
clusterdb
createdb
createlang
createuser
dropdb
droplang
dropuser
oid2name
pg_archivecleanup
pg_basebackup
pgbench
pg_dump
pg_dumpall
pg_isready
pg_receivexlog
pg_recvlogical
pg_restore
pg_standby
pg_test_fsync
pg_test_timing
psql
reindexdb
vacuumdb
vacuumlo

/etc/postgresql/9.4/main/postgresql.conf data_directory still points to
the Beta 1 cluster in /var/lib/postgresql/9.4/main


Mike Christensen | 24 Jul 22:31 2014

Regular expression question with Postgres

I'm curious why this query returns 0:

SELECT 'AAA' ~ '^A{,4}$'

Yet, this query returns 1:

SELECT 'AAA' ~ '^A{0,4}$'

Is this a bug with the regular expression engine?  
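For what it's worth, this looks like an engine difference rather than a bug: in POSIX-style regexes a bound must start with a digit, so `{,4}` is taken as literal text, and `'AAA' ~ '^A{,4}$'` can only match the literal string `A{,4}`. Python's re module, by contrast, accepts an omitted lower bound; a quick sketch of the contrast (Python standing in for the permissive behaviour):

```python
import re

# Python treats '{,4}' as '{0,4}': an omitted lower bound defaults to 0.
assert re.fullmatch(r'A{,4}', 'AAA') is not None

# An engine that rejects '{,4}' as a bound instead matches it literally,
# so only the literal string 'A{,4}' satisfies the escaped pattern.
assert re.fullmatch(r'A\{,4\}', 'A{,4}') is not None
```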
William Nolf | 24 Jul 21:04 2014

copy/dump database to text/csv files

This is probably an easy one for most sql users but I don't use it very often.

 

We have a postgres database that was used for an application we no longer use. However, we would like to copy/dump the tables to files, text or csv, so we can post them to sharepoint.

 

COPY seems to be what I want, but it only does one table at a time. There are 204 tables with schemaname = 'public'. I need to copy the tables with data to files. I need something like a for loop which checks whether each table is populated and, if so, copies it to a tablename.csv file.

 

Any ideas?
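One sketch of an approach (assuming shell access and psql; the database name "mydb" and the table names below are placeholders): list the tables from pg_tables, count their rows, and emit one psql \copy command per non-empty table. The command generation itself can be expressed as:

```python
# Sketch: emit one "psql ... \copy" command per non-empty table.
# In practice the (table, rowcount) pairs would come from queries like
#   SELECT tablename FROM pg_tables WHERE schemaname = 'public';
#   SELECT count(*) FROM public.<table>;
def copy_commands(tables_with_counts, dbname="mydb"):
    cmds = []
    for table, rowcount in tables_with_counts:
        if rowcount > 0:  # skip empty tables
            cmds.append(
                'psql -d {db} -c "\\copy public.{t} to \'{t}.csv\' csv header"'
                .format(db=dbname, t=table)
            )
    return cmds
```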

 

thanks

Bill Nolf | Sr. Linux Engineer (RHCE)



8401 Greensboro Dr. #930
McLean, VA 22102
O: 703-275-8461


Seref Arikan | 24 Jul 19:33 2014

Are queries run completely before a Cursor can be used?

Greetings,
The documentation for Cursors at http://www.postgresql.org/docs/9.2/static/plpgsql-cursors.html says that:

"Rather than executing a whole query at once, it is possible to set up a cursor that encapsulates the query, and then read the query result a few rows at a time. One reason for doing this is to avoid memory overrun when the result contains a large number of rows"

I'm assuming the memory overrun mentioned here is in the memory of the client process connecting to postgres. I think that when a cursor ref is returned, say from a function, the query needs to be completed and the results must be ready for the cursor to move forward.

If that is the case, there must be a temporary table, presumably with one or more parameters to adjust its size, location/tablespace, etc.

Is this how cursors work internally? I can't imagine the complexity of managing cursor operations in anything other than extremely simple sql queries.

Any comments and/or pointers to documentation which explains this would be much appreciated.

Best regards
Seref
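For what it's worth, a cursor need not imply a materialized temporary table: many plans can be executed incrementally, with the executor suspended between fetches (though some plan nodes, such as sorts, do have to consume their whole input first). A loose Python analogy using a generator:

```python
# Loose analogy: a generator "suspends" the producer between fetches,
# much as a cursor suspends query execution between FETCH commands --
# no structure holding the full result set is required.
def query_rows(n):
    for i in range(n):
        yield i  # a row is produced only when the consumer asks for it

cur = query_rows(10**9)                      # open: nothing executed yet
first_three = [next(cur) for _ in range(3)]  # FETCH 3
```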

Ramesh T | 24 Jul 16:22 2014

Re: tab_to_sting

SELECT qa.tab_to_largestringcheck(cast(array_agg(part_id) as t_varchar2_tab), ':')
FROM qa.part;

when I replace it with string_agg, the query returns "does not exist".

Do I need to enable string_agg? I think it is predefined, right?






On Thu, Jul 24, 2014 at 7:47 PM, Ramesh T <rameshparnanditech <at> gmail.com> wrote:
postgres 9.3


On Thu, Jul 24, 2014 at 7:46 PM, Adrian Klaver <adrian.klaver <at> aklaver.com> wrote:
On 07/24/2014 07:11 AM, Ramesh T wrote:
hi,
I looked into that link. When I run string_agg, it returns "does not exist".
But I'm using a function here; I'm not passing a table to the function, only
a column name and delimiter, from a select statement.
Please look at my first post.

What version of Postgres are you using?

The query below should work:

SELECT deptno, string_agg(employee, ',')

FROM   emp
GROUP BY deptno;

thanks,
ram


--
Adrian Klaver
adrian.klaver <at> aklaver.com


Adrian Klaver | 24 Jul 16:16 2014

Re: tab_to_sting

On 07/24/2014 07:11 AM, Ramesh T wrote:
> hi,
> I looked into that link. When I run string_agg, it returns "does not exist".
> But I'm using a function here; I'm not passing a table to the function, only
> a column name and delimiter, from a select statement.
> Please look at my first post.

What version of Postgres are you using?

The query below should work:

SELECT deptno, string_agg(employee, ',')
FROM   emp
GROUP BY deptno;
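As a cross-check of what string_agg computes, the same grouping and concatenation in Python (sample data invented for illustration):

```python
from itertools import groupby

emp = [(10, "smith"), (10, "jones"), (20, "blake")]  # (deptno, employee)

# Sort by deptno, group, and join the names with ',' --
# the same shape of result as the string_agg query above.
by_dept = {
    dept: ",".join(name for _, name in grp)
    for dept, grp in groupby(sorted(emp), key=lambda row: row[0])
}
```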

> thanks,
> ram

-- 
Adrian Klaver
adrian.klaver <at> aklaver.com


Fabio Milano | 24 Jul 05:53 2014

Standby Server Bus 7 error

Hi,

 

Any assistance in interpreting the logs is much appreciated.

 

The replication server crashes. Below is a snippet from the log.

 

2014-07-22 23:36:23 EDT LOG:  started streaming WAL from primary at 12/B0000000 on timeline 1
2014-07-22 23:43:12 EDT FATAL:  could not receive data from WAL stream: server closed the connection unexpectedly
                This probably means the server terminated abnormally
                before or while processing the request.

cp: cannot stat '/var/lib/postgresql/archive/0000000100000012000000D0': No such file or directory
2014-07-22 23:43:12 EDT LOG:  unexpected pageaddr 12/8D000000 in log segment 0000000100000012000000D0, offset 0
2014-07-22 23:43:28 EDT LOG:  restored log file "0000000100000012000000D0" from archive
cp: cannot stat '/var/lib/postgresql/archive/0000000100000012000000D1': No such file or directory
2014-07-22 23:43:28 EDT LOG:  unexpected pageaddr 12/A5000000 in log segment 0000000100000012000000D1, offset 0
2014-07-22 23:43:28 EDT LOG:  started streaming WAL from primary at 12/D1000000 on timeline 1

2014-07-22 23:43:28 EDT LOG:  started streaming WAL from primary at 12/D1000000 on timeline 1

2014-07-22 23:53:13 EDT FATAL:  could not receive data from WAL stream: server closed the connection unexpectedly

                This probably means the server terminated abnormally

                before or while processing the request.

 

cp: cannot stat '/var/lib/postgresql/archive/0000000100000012000000F1': No such file or directory

2014-07-22 23:53:13 EDT LOG:  unexpected pageaddr 12/AA000000 in log segment 0000000100000012000000F1, offset 0

2014-07-22 23:53:13 EDT LOG:  started streaming WAL from primary at 12/F1000000 on timeline 1

 

 

 

 

Upon trying a /etc/init.d/postgresql restart

 

2014-07-23 14:47:55 EDT LOG:  restored log file "000000010000001B0000001D" from archive
2014-07-23 14:47:55 EDT LOG:  consistent recovery state reached at 1B/1DFC64C0
2014-07-23 14:47:55 EDT LOG:  database system is ready to accept read only connections
2014-07-23 14:47:55 EDT LOG:  restored log file "000000010000001B0000001E" from archive
2014-07-23 14:47:56 EDT LOG:  startup process (PID 730) was terminated by signal 7: Bus error
2014-07-23 14:47:56 EDT LOG:  terminating any other active server processes

 

Recovery.conf

 

restore_command = 'cp /var/lib/postgresql/archive/%f %p'

standby_mode = 'on'

primary_conninfo = 'host=[ipaddress] port=5432 user=sherweb_standby_server sslmode=require'

archive_cleanup_command = '/usr/lib/postgresql/9.3/bin/pg_archivecleanup /var/lib/postgresql/9.3/archive/ %r'

matt | 24 Jul 03:35 2014

Table checksum proposal

I have a suggestion for a table checksumming facility within PostgreSQL. 
The applications are reasonably obvious - detecting changes to tables,
validating data migrations, unit testing etc.  A possible algorithm is as
follows:

1. For each row of the table, take the binary representations of the
values and serialise them to CSV.
2. Calculate the MD5 sum of each CSV-serialised row.
3. XOR the row MD5 sums together.
4. CSV-serialise and MD5 a list of representations (of some sort) of the
types of the table's columns and XOR it with the rest.
5. Output the result as the table's checksum.
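The five steps above can be sketched as follows (Python standing in for an SPI implementation, purely to illustrate the mechanics and the order-independence):

```python
import csv
import hashlib
import io
from functools import reduce

def row_digest(values):
    # Steps 1-2: CSV-serialise one row, then MD5 it, as a 128-bit integer.
    buf = io.StringIO()
    csv.writer(buf).writerow(values)
    return int.from_bytes(hashlib.md5(buf.getvalue().encode()).digest(), "big")

def table_checksum(rows, column_types):
    # Step 3: XOR the row digests -- commutative, so row order is irrelevant.
    acc = reduce(lambda a, b: a ^ b, (row_digest(r) for r in rows), 0)
    # Step 4: fold in a digest of the column-type representations.
    acc ^= row_digest(column_types)
    # Step 5: fixed 128-bit output.
    return f"{acc:032x}"
```

One consequence of XOR worth weighing: two identical rows cancel each other out, so a table containing a duplicated row checksums the same as one omitting the pair.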

Advantages of this approach:

1. Easily implemented using SPI.
2. Since XOR is commutative and associative, order of ingestion of rows
doesn't matter; therefore, unlike some other table checksumming methods,
this doesn't need an expensive ORDER BY *.  So, this should be pretty much
as fast as a SELECT * FROM, which is probably as fast as a table checksum
can be.
3. Using a cursor in SPI, rows can be ingested a few at a time.  So memory
footprint is low even for large tables.
4. Output has a convenient fixed size of 128 bits.

Questions:

1. Should this be a contrib module which provides a function, or should it
be a built-in piece of functionality?
2. Is MD5 too heavyweight for this?  Would using a non-cryptographic
checksum be worth the speed boost?
3. Is there a risk of different architectures/versions returning different
checksums for tables which could be considered identical?  If so, is this
worth worrying about?

I have knocked up some sample code if anyone is interested.

Regards,

Matthew


