ERR ORR | 24 Jan 22:44 2014

PostgreSQL 9.3.2 server crashes upon start

The server crashes when I start it from the command line without first changing to the home directory of the postgres user. If I change to the user's home directory beforehand, the server starts OK.

Environment:
-------------------
Fedora 20 64-bit on an Intel laptop
postgresql93-server-9.3.2-2PGDG.f20.x86_64

What I did:
----------------
- su root (from my standard logon user)
- su postgres
- $ /usr/pgsql-9.3/bin/postgres -D /clusterfs -s &
==> CRASH

But:
- su root (from my standard logon user)
- su postgres
- cd
- $ /usr/pgsql-9.3/bin/postgres -D /clusterfs -s &
==> OK

A second problem is that I can't report this to the Red Hat Bugzilla via ABRT, as it doesn't have a "postgresql93" component.

If you need any further info please let me know.
kieran.mccusker | 24 Jan 18:01 2014

BUG #8956: fedora download does not contain pg_config

The following bug has been logged on the website:

Bug reference:      8956
Logged by:          Kieran
Email address:      kieran.mccusker <at> gmail.com
PostgreSQL version: 9.3.2
Operating system:   Linux fedora 20
Description:        

Sorry if this is the wrong place. I was trying to install file_textarray_fdw, but it is not in the Fedora contrib RPM, so I tried to build it from GitHub, only to find that pg_config is not among the client programs. Is this intentional?

Thanks

Kieran

--

-- 
Sent via pgsql-bugs mailing list (pgsql-bugs <at> postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-bugs

Moshe Jacobson | 24 Jan 17:40 2014

Rounding error on extract(epoch ..)

It seems that extract(epoch ..) does not preserve the entire precision of the passed-in timestamp. In the following example, it rounds to the nearest 0.00001:

> select extract(epoch from '2014-01-21 16:10:30.403777'::timestamp);
    date_part
------------------
 1390320630.40378
(1 row)

Just to be sure, I converted this value back to a timestamp, and it is still rounded (and incorrect):

> select to_timestamp(1390320630.40378);
         to_timestamp
------------------------------
 2014-01-21 11:10:30.40378-05
(1 row)

Oddly, when the result of extract(epoch) is passed directly back into to_timestamp, the result is correct!

> select to_timestamp(extract(epoch from '2014-01-21 16:10:30.403777'::timestamp));
         to_timestamp
-------------------------------
 2014-01-21 11:10:30.403777-05
(1 row)
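The behavior above is consistent with display rounding rather than storage loss: extract(epoch from ...) returns a float8, and float8 output is limited to 15 significant digits by default, while the full value is still available when the expression is nested. A sketch (assuming the default extra_float_digits setting of 0) that asks for more output digits:

```sql
-- float8 output shows 15 significant digits by default; the stored value
-- can hold more precision. Raising extra_float_digits reveals the extra
-- digits on output without changing the underlying value.
SET extra_float_digits = 3;
SELECT extract(epoch from '2014-01-21 16:10:30.403777'::timestamp);
```
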

Moshe Jacobson
Manager of Systems Engineering, Nead Werx Inc.
2323 Cumberland Parkway · Suite 201 · Atlanta, GA 30339

“Quality is not an act, it is a habit.” — Aristotle

nghia.le | 24 Jan 02:03 2014

BUG #8943: optimizer appears to not be efficient when there is little data in the query results

The following bug has been logged on the website:

Bug reference:      8943
Logged by:          Nghia Le
Email address:      nghia.le <at> postano.com
PostgreSQL version: 9.3.1
Operating system:   Amazon RDS
Description:        

The issue is that we ORDER BY post_time, scraped_time. For the most part, when there is a lot of data, the composite index we created works wonders. However, when there is little data (e.g. feed_id = 8924, which has about 54 rows), the planner doesn't know to just use a sequential scan; instead it walks the entire index and takes forever doing so.

Query plan with index_scan off:

 Limit  (cost=154847.18..154847.18 rows=1 width=684) (actual time=1.001..1.002 rows=1 loops=1)
   Output: p.id, p.guid, p.source_type, p.post_time, p.source_id, p.title, p.url, p.picture_url, p.video_url, p.media_mime_type, p.media_height, p.media_width, p.text, p.user_icon_url, p.user_id, p.user_displayname, p.user_fullname, p.text_config_name, p.feed_id, p.scraped_time
   Buffers: shared hit=119 read=4
   ->  Sort  (cost=154847.18..154877.28 rows=12038 width=684) (actual time=0.998..0.998 rows=1 loops=1)
         Output: p.id, p.guid, p.source_type, p.post_time, p.source_id, p.title, p.url, p.picture_url, p.video_url, p.media_mime_type, p.media_height, p.media_width, p.text, p.user_icon_url, p.user_id, p.user_displayname, p.user_fullname, p.text_config_name, p.feed_id, p.scraped_time
         Sort Key: p.post_time, p.scraped_time
         Sort Method: top-N heapsort  Memory: 26kB
         Buffers: shared hit=119 read=4
         ->  Nested Loop Anti Join  (cost=294.53..154786.99 rows=12038 width=684) (actual time=0.073..0.927 rows=23 loops=1)
               Output: p.id, p.guid, p.source_type, p.post_time, p.source_id, p.title, p.url, p.picture_url, p.video_url, p.media_mime_type, p.media_height, p.media_width, p.text, p.user_icon_url, p.user_id, p.user_displayname, p.user_fullname, p.text_config_name, p.feed_id, p.scraped_time
               Buffers: shared hit=119 read=4
               ->  Bitmap Heap Scan on public.post p  (cost=290.11..48544.89 rows=12586 width=684) (actual time=0.042..0.583 rows=23 loops=1)
                     Output: p.id, p.guid, p.scraped_time, p.source_type, p.post_time, p.source_id, p.title, p.url, p.picture_url, p.video_url, p.media_mime_type, p.media_height, p.media_width, p.text, p.user_icon_url, p.user_id, p.user_displayname, p.user_fullname, p.text_config_name, p.feed_id, p.tsv, p.original_source_id
                     Recheck Cond: (p.feed_id = 8924)
                     Buffers: shared hit=5 read=3
                     ->  Bitmap Index Scan on feed_id_idx  (cost=0.00..286.96 rows=12586 width=0) (actual time=0.033..0.033 rows=23 loops=1)
                           Index Cond: (p.feed_id = 8924)
                           Buffers: shared hit=3 read=1
               ->  Bitmap Heap Scan on public.post p1  (cost=4.42..8.44 rows=1 width=8) (actual time=0.009..0.009 rows=0 loops=23)
                     Output: p1.id, p1.guid, p1.scraped_time, p1.source_type, p1.post_time, p1.source_id, p1.title, p1.url, p1.picture_url, p1.video_url, p1.media_mime_type, p1.media_height, p1.media_width, p1.text, p1.user_icon_url, p1.user_id, p1.user_displayname, p1.user_fullname, p1.text_config_name, p1.feed_id, p1.tsv, p1.original_source_id
                     Recheck Cond: (p1.id = p.id)
                     Filter: ((p1.original_source_id IS NOT NULL) AND ((p1.source_type)::text = ANY ('{twitter_stream,twitter_search,twitter_lists,twitter_user,twitter_hashtag}'::text[])))
                     Rows Removed by Filter: 1
                     Buffers: shared hit=114 read=1
                     ->  Bitmap Index Scan on post_pkey  (cost=0.00..4.42 rows=1 width=0) (actual time=0.006..0.006 rows=1 loops=23)
                           Index Cond: (p1.id = p.id)
                           Buffers: shared hit=91 read=1
 Total runtime: 1.078 ms

---------
Query plan with index scan on:

 Limit  (cost=1.13..61.38 rows=5 width=684) (actual time=0.058..0.110 rows=5 loops=1)
   Output: p.id, p.guid, p.source_type, p.post_time, p.source_id, p.title, p.url, p.picture_url, p.video_url, p.media_mime_type, p.media_height, p.media_width, p.text, p.user_icon_url, p.user_id, p.user_displayname, p.user_fullname, p.text_config_name, p.feed_id, p.scraped_time
   Buffers: shared hit=28 read=3
   ->  Nested Loop Anti Join  (cost=1.13..144978.35 rows=12032 width=684) (actual time=0.055..0.098 rows=5 loops=1)
         Output: p.id, p.guid, p.source_type, p.post_time, p.source_id, p.title, p.url, p.picture_url, p.video_url, p.media_mime_type, p.media_height, p.media_width, p.text, p.user_icon_url, p.user_id, p.user_displayname, p.user_fullname, p.text_config_name, p.feed_id, p.scraped_time
         Buffers: shared hit=28 read=3
         ->  Index Scan using feed_id_idx on public.post p  (cost=0.56..38902.40 rows=12580 width=684) (actual time=0.028..0.043 rows=5 loops=1)
               Output: p.id, p.guid, p.scraped_time, p.source_type, p.post_time, p.source_id, p.title, p.url, p.picture_url, p.video_url, p.media_mime_type, p.media_height, p.media_width, p.text, p.user_icon_url, p.user_id, p.user_displayname, p.user_fullname, p.text_config_name, p.feed_id, p.tsv, p.original_source_id
               Index Cond: (p.feed_id = 8924)
               Buffers: shared hit=4 read=2
         ->  Index Scan using post_pkey on public.post p1  (cost=0.56..8.43 rows=1 width=8) (actual time=0.007..0.007 rows=0 loops=5)
               Output: p1.id, p1.guid, p1.scraped_time, p1.source_type, p1.post_time, p1.source_id, p1.title, p1.url, p1.picture_url, p1.video_url, p1.media_mime_type, p1.media_height, p1.media_width, p1.text, p1.user_icon_url, p1.user_id, p1.user_displayname, p1.user_fullname, p1.text_config_name, p1.feed_id, p1.tsv, p1.original_source_id
               Index Cond: (p1.id = p.id)
               Filter: ((p1.original_source_id IS NOT NULL) AND ((p1.source_type)::text = ANY ('{twitter_stream,twitter_search,twitter_lists,twitter_user,twitter_hashtag}'::text[])))
               Rows Removed by Filter: 1
               Buffers: shared hit=24 read=1
 Total runtime: 0.186 ms

Original query:

explain (analyze, buffers, verbose)
SELECT p.id, p.guid, source_type, post_time, source_id, title, url, picture_url, video_url, media_mime_type, media_height, media_width, text, user_icon_url, user_id, user_displayname, user_fullname, text_config_name, feed_id, scraped_time AS t
FROM post AS p
LEFT JOIN location AS l ON l.post_id = p.id
WHERE (feed_id = 8924)
  AND NOT EXISTS (SELECT 1
                  FROM post p1
                  WHERE p1.id = p.id
                    AND p1.source_type IN ('twitter_stream', 'twitter_search', 'twitter_lists', 'twitter_user', 'twitter_hashtag')
                    AND p1.original_source_id IS NOT NULL)
ORDER BY post_time DESC, scraped_time DESC
LIMIT 1;
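The row estimates in both plans (rows=12586 estimated vs. 23 actual for feed_id = 8924) are far off, which is one common reason a cheaper plan gets mispriced. A hedged sketch of what one might try first, assuming the statistics are stale or too coarse (table and column names taken from the query above; the statistics target of 1000 is an illustrative choice, not a recommendation from this report):

```sql
-- Refresh planner statistics; the 12586-vs-23 row misestimate for
-- feed_id = 8924 suggests they are stale or too coarse.
ANALYZE post;

-- If the feed_id distribution is highly skewed, a larger per-column
-- statistics target may sharpen the estimates; re-run ANALYZE afterwards.
ALTER TABLE post ALTER COLUMN feed_id SET STATISTICS 1000;
ANALYZE post;
```
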


maxim.boguk | 24 Jan 13:37 2014

BUG #8954: Wrong text in database log during "truncate scan"

The following bug has been logged on the website:

Bug reference:      8954
Logged by:          Maksym
Email address:      maxim.boguk <at> gmail.com
PostgreSQL version: 9.2.4
Operating system:   Linux
Description:        

Hi,

If a manual VACUUM is performed on the table and cannot acquire the required lock for the truncate scan, it writes a somewhat misleading log message:

2014-01-24 16:19:11 MSK 7134 cron.app <at> *** from [local] [vxid:297/14422865
txid:2179070646] [VACUUM] LOG:  automatic vacuum of table
"hh_data.public.vacancy_response": could not (re)acquire exclusive lock for
truncate scan
2014-01-24 16:19:11 MSK 7134 cron.app <at> *** from [local] [vxid:297/14422865
txid:2179070646] [VACUUM] STATEMENT:  VACUUM public.vacancy_response
2014-01-24 16:19:11 MSK 7134 cron.app <at> *** from [local] [vxid:297/0 txid:0]
[VACUUM] LOG:  duration: 589175.931 ms  statement: VACUUM
public.vacancy_response

Please note "automatic vacuum of table".
It was definitely a manual VACUUM run, not autovacuum.


Sergei.Agalakov | 23 Jan 18:27 2014

BUG #8939: Cast to range type fails due to whitespaces in the text string

The following bug has been logged on the website:

Bug reference:      8939
Logged by:          Sergei Agalakov
Email address:      Sergei.Agalakov <at> gmail.com
PostgreSQL version: 9.3.2
Operating system:   Windows
Description:        

According to the documentation (http://www.postgresql.org/docs/9.3/interactive/rangetypes.html, section 8.17.5, Range Input/Output):
"Whitespace is allowed before and after the range value, but any whitespace
between the parentheses or brackets is taken as part of the lower or upper
bound value. (Depending on the element type, it might or might not be
significant.)"
The real behavior is different for bounded and unbounded ranges:
SELECT '[10,)'::int4range; -- OK
SELECT '[10, )'::int4range; -- Fails
SELECT '[10,12 )'::int4range; -- OK
SELECT '(, 10]'::int4range; -- OK
SELECT '(,10 )'::int4range; -- OK
SELECT '( ,10)'::int4range; -- Fails
SELECT '( 8,10)'::int4range; -- OK

The constructor works predictably, always trimming whitespace:
SELECT int4range( 10 , null , '[)' ); -- OK

I would expect that for range types where whitespace can't be significant for evaluating the elements, it would be trimmed before evaluating a SQL statement that casts to a range type. Currently, the implementation of casting to range types is too picky about the exact syntax of the text string.


marcusoverheu | 23 Jan 10:47 2014

BUG #8934: value != ANY (uuid[]) AND expr does not work in all cases

The following bug has been logged on the website:

Bug reference:      8934
Logged by:          Marcus Overheu
Email address:      marcusoverheu <at> gmail.com
PostgreSQL version: 9.2.4
Operating system:   ubuntu 12.04.3
Description:        

If you have a table like this:

 create table uuidtest( aint integer, auuid uuid);

 insert into uuidtest VALUES
   (1, 'ab15ea27-8768-4cbe-a1fb-02a2a949a34d'),
   (1, '1c9e9ce5-d4d8-4a14-9eaf-929f27e09ba2');

then

 select * from uuidtest
  where auuid = ANY (ARRAY['1c9e9ce5-d4d8-4a14-9eaf-929f27e09ba2','ab15ea27-8768-4cbe-a1fb-02a2a949a34d']::uuid[])
    and aint = 1;

returns 2 rows, which would suggest that the negated form, used like NOT IN,

 select * from uuidtest
  where auuid != ANY (ARRAY['1c9e9ce5-d4d8-4a14-9eaf-929f27e09ba2','ab15ea27-8768-4cbe-a1fb-02a2a949a34d']::uuid[])
    and aint = 1;

would return zero rows, but it still returns 2 rows.
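For comparison: x != ANY (array) is true whenever x differs from at least one array element, so with two distinct UUIDs in the array it matches every row. The NOT IN semantics correspond to != ALL. A minimal sketch against the same table:

```sql
-- "!= ANY" is true if the value differs from at least one array element;
-- "!= ALL" (the equivalent of NOT IN) requires it to differ from every element.
select * from uuidtest
 where auuid != ALL (ARRAY['1c9e9ce5-d4d8-4a14-9eaf-929f27e09ba2',
                           'ab15ea27-8768-4cbe-a1fb-02a2a949a34d']::uuid[])
   and aint = 1;
-- zero rows, matching the NOT IN expectation
```
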


oren | 23 Jan 14:48 2014

BUG #8936: cluster initialisation failed

The following bug has been logged on the website:

Bug reference:      8936
Logged by:          Oren
Email address:      oren <at> vnt-software.com
PostgreSQL version: 9.1.5
Operating system:   Windows 8, 8.1, 2012
Description:        

Hi,

When trying to install PostgreSQL using a domain user while another user with the same name is configured locally(!), initdb.exe fails.
(initcluster.vbs runs initdb.exe.)

It seems that the installer grants permissions on the data folder to the local user instead of the domain user I logged in with.

Checked with versions 9.1.5 and 9.1.11.

More Details:

Description:	initdb - initialize a new database cluster
Company:	PostgreSQL Global Development Group
Name:	initdb.exe
Version:	9.1.5
Path:	C:\Program Files\PostgreSQL\9.1\bin\initdb.exe
Command Line:	"C:\Program Files\PostgreSQL\9.1\bin\initdb.exe" --pwfile
"C:\Program Files\PostgreSQL\9.1\radD2D3C.tmp" --encoding=UTF-8 -A md5 -U
postgres -D "C:\Program Files\PostgreSQL\9.1\data" 

Event Class:	File System
Operation:	CreateFile
Result:	ACCESS DENIED
Path:	C:\Program Files\PostgreSQL\9.1\data
TID:	304
Duration:	0.0002397
Desired Access:	Write Attributes, Synchronize
Disposition:	Open
Options:	Synchronous IO Non-Alert, Open Reparse Point
Attributes:	n/a
ShareMode:	Read, Write, Delete
AllocationSize:	n/a


Elyor Latipov | 23 Jan 08:50 2014

If operator statements

This statement doesn't work:

IF NOT EXISTS (select TRUE from information_schema.constraint_column_usage where constraint_name='projects_pkey')
THEN
    ALTER TABLE projects ADD CONSTRAINT projects_pkey PRIMARY KEY(id);
END IF; 
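IF ... THEN ... END IF is PL/pgSQL, not plain SQL, so it cannot run as a standalone statement. A sketch of the same check wrapped in an anonymous DO block (available since PostgreSQL 9.0), using the table and constraint names from the statement above:

```sql
DO $$
BEGIN
    -- IF is only valid inside PL/pgSQL, so wrap the conditional DDL in DO.
    IF NOT EXISTS (SELECT TRUE
                   FROM information_schema.constraint_column_usage
                   WHERE constraint_name = 'projects_pkey') THEN
        ALTER TABLE projects ADD CONSTRAINT projects_pkey PRIMARY KEY (id);
    END IF;
END
$$;
```
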


__________________
Best regards,
Elyor.
rahuljadhav119 | 23 Jan 07:47 2014

BUG #8929: Drupal Website not working on Postgres 9.2

The following bug has been logged on the website:

Bug reference:      8929
Logged by:          Jadhav Rahul
Email address:      rahuljadhav119 <at> gmail.com
PostgreSQL version: 8.4.0
Operating system:   Windows 7
Description:        

I developed a website in Drupal 6 with Postgres 8.4; now I have to use Postgres 9.2.

I took a backup in plain format from 8.4 to restore into 9.2.

After the restore process, when I load the website in a browser it shows only a blank page.

The website still runs fine with Postgres 8.4.


Mikhail Pryakhin | 21 Jan 22:00 2014

Socket unexpectedly closed by a frontend while communicating with the backend


Hello guys! I'm a little bit confused by the following case.

I've got a PostgreSQL server running on host A and a Java-based client running on host B. The client uses the org.postgresql.Driver JDBC driver (version 9.1-901.jdbc3).

Sometimes, while executing a long-running stored procedure, I get the exception "java.net.SocketException: Socket closed". I'm using org.apache.commons.dbcp.BasicDataSource to retrieve connections. The DBCP pool is configured with default options.

I captured a tcp dump in order to figure out on which side (client or server) the socket is being closed.

Here is what I've got:

1.  Client B sends a test query message ("Select 1") when it tries to borrow a connection from the DBCP pool.
2.  Server A sends a successful response back (Type: Command completion, Ready for query).
3.  Client B sends an ACK message in response to server A's response (see item 2).
4.  Client B sends the query message to server A.
5.  Server A sends an ACK message in response to the client's Query message (see item 4).
6.  Client B sends a terminating message (Type: Termination) after some time has passed (3 to 10 minutes, sometimes even more).
7.  Client B sends a FIN, ACK message to the server.
8.  Server A sends back an ACK for the termination message.
9.  Server A sends an ACK for the (FIN, ACK) message (item 7).
10. Server A sends back a response to the client query (from item 4): Type: Row description, Columns: 40.
11. Client B sends an RST message (reset).
12. Server A continues sending the query response: Type: Data row, Length: 438, Columns: 40, and so on.
13. Client B sends an RST message (reset) again.
14. Server A continues sending the query response: Type: Data row, Length: 438, Columns: 40, and so on.
15. Client B sends an RST message (reset).

After that, the communication seems to be finished.

After item 6, my client logs show an exception like the following:

Caused by: java.net.SocketException: Socket closed
        at java.net.SocketInputStream.socketRead0(Native Method)
        at java.net.SocketInputStream.read(SocketInputStream.java:152)
        at java.net.SocketInputStream.read(SocketInputStream.java:122)
        at org.postgresql.core.VisibleBufferedInputStream.readMore(VisibleBufferedInputStream.java:145)
        at org.postgresql.core.VisibleBufferedInputStream.ensureBytes(VisibleBufferedInputStream.java:114)
        at org.postgresql.core.VisibleBufferedInputStream.read(VisibleBufferedInputStream.java:73)
        at org.postgresql.core.PGStream.ReceiveChar(PGStream.java:274)
        at org.postgresql.core.v3.QueryExecutorImpl.processResults(QueryExecutorImpl.java:1661)
        at org.postgresql.core.v3.QueryExecutorImpl.execute(QueryExecutorImpl.java:257)

Could you please help me figure out the reason for such a failure? (This bug happens once per 10 successful cases.)

Thanks in advance!


Best regards, Mike Pryakhin.

