Mehran Ziadloo | 24 Jul 05:24 2016

A simple extension imitating pg_notify

Hello everyone,

I'm looking for a way to send notifications from within one database to
another, as if pg_notify accepted a database name, or could even broadcast to
all databases. So far I could not find a way to do so. Someone told me that I
would need to implement this feature myself and add it to PostgreSQL as an
extension.

Being new to extension development, I have no idea where to start. Can someone
please point me in the right direction? I'm sure implementing such a function
is fairly simple, but only if you know where to begin, which I don't!
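
For reference, a NOTIFY issued over a dblink connection is delivered to the
listeners of that other database, so a rough workaround looks like the sketch
below (assuming the dblink extension is installed; the channel name and
connection string are placeholders):

-- run from database A: open a connection to database B and notify there
SELECT dblink_exec('dbname=db_b', $$NOTIFY my_channel, 'payload from db_a'$$);
-- a session connected to db_b that has executed "LISTEN my_channel;" will
-- receive the notification once the remote command commits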

Regards,
Mehran
John R Pierce | 23 Jul 02:24 2016

yum repo, pgloader

this question is mostly for Devrim, who maintains the RHEL/CentOS/Fedora 
PGDG yum repositories...

I note that pgloader is in the yum repo for PostgreSQL 9.2 on CentOS 6, 
but not for 9.3, 9.4, or 9.5; I'm wondering if there's a reason it was dropped?

-- 
john r pierce, recycling bits in santa cruz


Attacker One | 22 Jul 10:35 2016

Re: MongoDB 3.2 beating Postgres 9.5.1?

Hi,

I recently ran the YCSB benchmark too, but contrary to my expectation, PG
(9.5) was slower than MongoDB 3.2.

Paul said that creating the table with the no-logging option improved the
performance and that this might match MongoDB's behavior. But according to the
MongoDB documentation, MongoDB writes a journal log too, so I think turning
off logging is not a fair comparison.

Am I wrong about MongoDB's behavior?

(C)Bitnine, Kisung Kim, Ph.D
https://sites.google.com/site/kisungresearch/
E-mail : kskim <at> bitnine.net
Office phone : 070-4800-5890, 408-606-8602
US Mobile phone : 408-805-2192

 

2016-03-19 5:05 GMT+09:00 <pbj <at> cmicdo.com>:

 

On Tuesday, March 15, 2016 7:39 PM, "pbj <at> cmicdo.com" <pbj <at> cmicdo.com> wrote:

 

 > Your results are close enough to mine, I think, to prove the point.
 > And I agree that the EDB benchmark is not necessarily reflective of a
 > real-world scenario.
 >
 > However, the cache I'm referring to is PG's shared_buffers cache.
 > You can see the first run of the select causing a lot of disk reads.
 > The second, identical run reads purely from shared_buffers.
 >
 > What I don't understand is: why does a slightly different select from
 > the *same* table during the same session cause shared_buffers to be
 > blown out and re-read?
 >
 > I will see if I can try YCSB next week (I'm in workshops all week...)
 >
 > Thanks!

I was able to try YCSB today on both PG 9.5.1 and Mongo 3.2.  At first, PG
was running 4 times slower than Mongo.  Then I remembered about unlogged
tables (which, I think, is effectively how Mongo operates all the time) and
remade the PG table as UNLOGGED.  In a 50/50 read/update test over 1M records,
PG ran in 0.62 of Mongo's time.
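
For anyone repeating this, the switch is a one-liner on 9.5+ (a sketch,
assuming YCSB's default JDBC table name of usertable; substitute your own
table name otherwise):

-- skip WAL for this table; its contents are no longer crash-safe
ALTER TABLE usertable SET UNLOGGED;
-- or create it unlogged up front with CREATE UNLOGGED TABLE usertable (...);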

PG Load:
--------
[OVERALL], RunTime(ms), 104507.0
[OVERALL], Throughput(ops/sec), 9568.737022400413
[CLEANUP], Operations, 1.0
[CLEANUP], AverageLatency(us), 293.0
[CLEANUP], MinLatency(us), 293.0
[CLEANUP], MaxLatency(us), 293.0
[CLEANUP], 95thPercentileLatency(us), 293.0
[CLEANUP], 99thPercentileLatency(us), 293.0
[INSERT], Operations, 1000000.0
[INSERT], AverageLatency(us), 101.329235
[INSERT], MinLatency(us), 88.0
[INSERT], MaxLatency(us), 252543.0
[INSERT], 95thPercentileLatency(us), 121.0
[INSERT], 99thPercentileLatency(us), 141.0
[INSERT], Return=OK, 1000000
 
PG Run:
-------
[OVERALL], RunTime(ms), 92763.0
[OVERALL], Throughput(ops/sec), 10780.16019318047
[READ], Operations, 499922.0
[READ], AverageLatency(us), 79.1722428698877
[READ], MinLatency(us), 69.0
[READ], MaxLatency(us), 19935.0
[READ], 95thPercentileLatency(us), 94.0
[READ], 99thPercentileLatency(us), 112.0
[READ], Return=OK, 499922
[CLEANUP], Operations, 1.0
[CLEANUP], AverageLatency(us), 222.0
[CLEANUP], MinLatency(us), 222.0
[CLEANUP], MaxLatency(us), 222.0
[CLEANUP], 95thPercentileLatency(us), 222.0
[CLEANUP], 99thPercentileLatency(us), 222.0
[UPDATE], Operations, 500078.0
[UPDATE], AverageLatency(us), 98.96430156895525
[UPDATE], MinLatency(us), 83.0
[UPDATE], MaxLatency(us), 26655.0
[UPDATE], 95thPercentileLatency(us), 127.0
[UPDATE], 99thPercentileLatency(us), 158.0
[UPDATE], Return=OK, 500078
 
Mongo Load:
-----------
[OVERALL], RunTime(ms), 133308.0
[OVERALL], Throughput(ops/sec), 7501.425270801452
[CLEANUP], Operations, 1.0
[CLEANUP], AverageLatency(us), 1822.0
[CLEANUP], MinLatency(us), 1822.0
[CLEANUP], MaxLatency(us), 1822.0
[CLEANUP], 95thPercentileLatency(us), 1822.0
[CLEANUP], 99thPercentileLatency(us), 1822.0
[INSERT], Operations, 1000000.0
[INSERT], AverageLatency(us), 130.830678
[INSERT], MinLatency(us), 90.0
[INSERT], MaxLatency(us), 7147519.0
[INSERT], 95thPercentileLatency(us), 159.0
[INSERT], 99thPercentileLatency(us), 226.0
[INSERT], Return=OK, 1000000
 
Mongo Run:
---------
[OVERALL], RunTime(ms), 149150.0
[OVERALL], Throughput(ops/sec), 6704.65973851827
[READ], Operations, 500837.0
[READ], AverageLatency(us), 98.13153980237084
[READ], MinLatency(us), 69.0
[READ], MaxLatency(us), 28271.0
[READ], 95thPercentileLatency(us), 166.0
[READ], 99thPercentileLatency(us), 186.0
[READ], Return=OK, 500837
[CLEANUP], Operations, 1.0
[CLEANUP], AverageLatency(us), 2387.0
[CLEANUP], MinLatency(us), 2386.0
[CLEANUP], MaxLatency(us), 2387.0
[CLEANUP], 95thPercentileLatency(us), 2387.0
[CLEANUP], 99thPercentileLatency(us), 2387.0
[UPDATE], Operations, 499163.0
[UPDATE], AverageLatency(us), 195.21505600375028
[UPDATE], MinLatency(us), 118.0
[UPDATE], MaxLatency(us), 4513791.0
[UPDATE], 95thPercentileLatency(us), 211.0
[UPDATE], 99thPercentileLatency(us), 252.0
[UPDATE], Return=OK, 499163


 >
 >
 > On Monday, March 14, 2016 3:34 AM, Dmitry Dolgov <9erthalion6 <at> gmail.com> wrote:
 >
 >
 > Hi, Paul
 >
 > I agree with Oleg, EDB benchmarks are strange sometimes. I did the same benchmarks several months ago. I never noticed the cache influence back then, so I tried to reproduce your situation now (on 5*10^6 records this time, though). I started to play with the OS page cache (using `echo 3 > /proc/sys/vm/drop_caches`), and I see a difference in execution time for two subsequent queries, but the `explain` info is almost identical, e.g. `shared hit & read`:
 >
 > ....
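
For readers following along, the `shared hit & read` figures mentioned above
come from running the statement with buffer accounting enabled (a sketch; the
table and predicate are placeholders):

EXPLAIN (ANALYZE, BUFFERS)
SELECT * FROM some_table WHERE some_col = 42;
-- plan nodes then report lines such as "Buffers: shared hit=... read=...";
-- "hit" pages were already in shared_buffers, while "read" pages had to be
-- fetched from the OS page cache or disk.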

 

 


Philippe BEAUDOIN | 22 Jul 11:27 2016

pg_dumping extensions having sequences with 9.6beta3

Hi all,

I am currently playing with extensions, and I found a strange behaviour change in 9.6beta2 and beta3 when pg_dumping a database with an extension that owns sequences. This looks like a bug ... unless I did something wrong.

Here is a test case (a simple Linux shell script that can easily be customized to reproduce the issue).


# pg_dump issue in postgres 9.6beta2 when dumping sequences linked to extensions
#
export PGBIN="/usr/local/pg96beta3/bin"
#export PGBIN="/usr/local/pg952/bin"
export EXTDIR="/tmp"
export PGDIR="/usr/local/pg96beta3/share/postgresql/extension"
#export PGDIR="/usr/local/pg952/share/postgresql/extension"
export PGPORT=5496
#export PGPORT=5495
export PGDATABASE='postgres'

echo "##################################################################################################################"
echo " "
echo "psql: prepare the initial environment: 1 schema + 2 tables with 1 serial column in each"
echo "---------------------------------------------------------------------------------------"
$PGBIN/psql -a <<*END*
select version();
-- cleanup
DROP EXTENSION IF EXISTS myextension;
DROP SCHEMA IF EXISTS myextension CASCADE;
-- create
CREATE SCHEMA myextension;
CREATE TABLE myextension.t1 (c1 SERIAL);
CREATE TABLE myextension.t2 (c1 SERIAL);
*END*

echo "create first files for extension management"
echo "-------------------------------------------"
cat >$EXTDIR/myextension.control <<*END*
default_version        = '1'
comment                = 'test'
directory            = '$EXTDIR'
superuser            = true
schema                = 'myextension'
relocatable            = false
*END*
sudo ln -s $EXTDIR/myextension.control $PGDIR/myextension.control

cat >$EXTDIR/myextension--unpackaged--1.sql <<*END*
-- for t1, both the table and its sequence are added to the extension
ALTER EXTENSION myextension ADD TABLE myextension.t1;
ALTER EXTENSION myextension ADD SEQUENCE myextension.t1_c1_seq;
-- for t2, the associated sequence is not added to the extension for now
ALTER EXTENSION myextension ADD TABLE myextension.t2;
-- create a new t3 table
CREATE TABLE t3 (c1 SERIAL);
*END*

echo "psql: create the extension from unpackaged"
echo "------------------------------------------"
$PGBIN/psql -a <<*END*
-- create
CREATE EXTENSION myextension FROM unpackaged;
-- check
\dx
SELECT classid, c1.relname, objid, c2.relname, c2.relkind, refclassid, r.relname, refobjid
  FROM pg_depend, pg_class c1, pg_class r, pg_class c2
  WHERE deptype = 'e'
    AND classid = c1.oid AND refclassid = r.oid AND objid = c2.oid
    AND c1.relname = 'pg_class';
*END*

echo " "
echo "So we now have 3 tables having a serial column:"
echo " - t1 explicitely added to the extension, with its sequence"
echo " - t2 explicitely added to the extension, but without its sequence"
echo " - t3 directly created inside the extensione"
echo " "

echo "sequences dumped by pg_dump (pg_dump |grep 'CREATE SEQUENCE')"
echo "---------------------------"
$PGBIN/pg_dump |grep 'CREATE SEQUENCE'

echo " "
echo "=> as expected, with latest minor versions of postgres 9.1 to 9.5, the sequences associated to the t1.c1 and t1.c3 columns are not dumped,"
echo "   while the sequence associated to t2.c1 is dumped."
echo "=> with 9.6beta3 (as with beta2), the 3 sequences are dumped."
echo " "

echo "cleanup"
echo "-------"
$PGBIN/psql <<*END*
DROP EXTENSION IF EXISTS myextension;
DROP SCHEMA IF EXISTS myextension CASCADE;
*END*

sudo rm $PGDIR/myextension.control
rm $EXTDIR/myextension*


And its output result:


##################################################################################################################
 
psql: prepare the initial environment: 1 schema + 2 tables with 1 serial column in each
---------------------------------------------------------------------------------------
select version();
                                                 version                                                
---------------------------------------------------------------------------------------------------------
 PostgreSQL 9.6beta3 on i686-pc-linux-gnu, compiled by gcc (Ubuntu 4.8.4-2ubuntu1~14.04.3) 4.8.4, 32-bit
(1 row)

-- cleanup
DROP EXTENSION IF EXISTS myextension;
NOTICE:  extension "myextension" does not exist, skipping
DROP EXTENSION
DROP SCHEMA IF EXISTS myextension CASCADE;
NOTICE:  schema "myextension" does not exist, skipping
DROP SCHEMA
-- create
CREATE SCHEMA myextension;
CREATE SCHEMA
CREATE TABLE myextension.t1 (c1 SERIAL);
CREATE TABLE
CREATE TABLE myextension.t2 (c1 SERIAL);
CREATE TABLE
create first files for extension management
-------------------------------------------
psql: create the extension from unpackaged
------------------------------------------
-- create
CREATE EXTENSION myextension FROM unpackaged;
CREATE EXTENSION
-- check
\dx
                    List of installed extensions
    Name     | Version |   Schema    |         Description         
-------------+---------+-------------+------------------------------
 myextension | 1       | myextension | test
 plpgsql     | 1.0     | pg_catalog  | PL/pgSQL procedural language
(2 rows)

SELECT classid, c1.relname, objid, c2.relname, c2.relkind, refclassid, r.relname, refobjid
  FROM pg_depend, pg_class c1, pg_class r, pg_class c2
  WHERE deptype = 'e'
    AND classid = c1.oid AND refclassid = r.oid AND objid = c2.oid
    AND c1.relname = 'pg_class';
 classid | relname  | objid |  relname  | relkind | refclassid |   relname    | refobjid
---------+----------+-------+-----------+---------+------------+--------------+----------
    1259 | pg_class | 32216 | t1        | r       |       3079 | pg_extension |    32226
    1259 | pg_class | 32214 | t1_c1_seq | S       |       3079 | pg_extension |    32226
    1259 | pg_class | 32222 | t2        | r       |       3079 | pg_extension |    32226
    1259 | pg_class | 32227 | t3_c1_seq | S       |       3079 | pg_extension |    32226
    1259 | pg_class | 32229 | t3        | r       |       3079 | pg_extension |    32226
(5 rows)

 
So we now have 3 tables having a serial column:
 - t1 explicitly added to the extension, together with its sequence
 - t2 explicitly added to the extension, but without its sequence
 - t3 created directly inside the extension
 
sequences dumped by pg_dump (pg_dump |grep 'CREATE SEQUENCE')
---------------------------
CREATE SEQUENCE t1_c1_seq
CREATE SEQUENCE t2_c1_seq
CREATE SEQUENCE t3_c1_seq
 
=> as expected, with the latest minor versions of postgres 9.1 to 9.5, the sequences associated with the t1.c1 and t3.c1 columns are not dumped,
   while the sequence associated with t2.c1 is dumped.
=> with 9.6beta3 (as with beta2), all 3 sequences are dumped.
 
cleanup
-------
DROP EXTENSION
DROP SCHEMA
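
As an aside, the objects a given extension owns can also be listed in a more
readable form, run before the cleanup step (a sketch using pg_depend and
pg_describe_object; the extension name is the one from the test case):

SELECT pg_describe_object(d.classid, d.objid, d.objsubid) AS member_object
  FROM pg_depend d
 WHERE d.deptype = 'e'
   AND d.refclassid = 'pg_extension'::regclass
   AND d.refobjid = (SELECT oid FROM pg_extension WHERE extname = 'myextension');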


I will be off during the coming two weeks, so I will only see any replies to this thread ... soon.

Best regards.

Philippe Beaudoin.


Andrew Geery | 21 Jul 21:49 2016

gin index operator class functions in sql?

For a custom type to be used in a btree index, it is possible to create a function in SQL (as opposed to C) that is used in the CREATE OPERATOR CLASS statement to specify how values are compared. A good example of this is presented in http://www.cybertec.at/2013/11/operator-classes-explained/.

Is it possible to define functions in SQL (as opposed to C) to do the same thing -- create an operator class -- for a GIN index? According to the documentation, there are four operations that would need to be supported: overlap, contains, is contained by, and equal [1]. If so, what would the SQL function signatures be?
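
For context, the support-function slots and signatures a GIN operator class needs can be read off the built-in GIN opclass for arrays (a sketch; it queries only the system catalogs and should run as-is):

SELECT a.amprocnum,
       p.proname,
       pg_get_function_arguments(p.oid) AS arguments
  FROM pg_amproc a
  JOIN pg_proc p ON p.oid = a.amproc
 WHERE a.amprocfamily = (SELECT opcfamily
                           FROM pg_opclass
                          WHERE opcname = 'array_ops'
                            AND opcmethod = (SELECT oid FROM pg_am
                                              WHERE amname = 'gin'))
 ORDER BY a.amprocnum;
-- Any slot whose arguments include the pseudo-type "internal" cannot be
-- covered by a LANGUAGE sql function, which (as far as I can tell) is why the
-- extractValue/extractQuery/consistent support functions are written in C.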

Thanks
Andrew

Aislan Luiz Wendling | 20 Jul 19:17 2016

Array value from table as parameter

Hi,

I need to pass an array as a parameter to a function, but this happens inside a trigger that reads the value from NEW.


I've tested it like this:

select function_x(10000,555555,array[['xxxxx'],['yyyyy']]) 
or
select function_x(10000,555555,array[['xxxxx','yyyyy']]) 

and it worked.


But if I use 

select function_x(10000,555555,new.situations)

it doesn't work, as the value comes like this: {"xxxxx","yyyyy"}


I couldn't find a function that converts the {} form back to [], short of treating it as a string and using replace(), which I don't think is the ideal solution because it may not handle more complex arrays.
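
For what it's worth, the {} form and the ARRAY[] form are two spellings of the same value, so no conversion should be needed (a sketch; function_x is the function from the question and the text[] column type is an assumption):

-- the two spellings compare equal:
SELECT '{"xxxxx","yyyyy"}'::text[] = ARRAY['xxxxx','yyyyy'];   -- returns true
-- so, if situations is declared as text[], NEW.situations can be passed as-is:
--   SELECT function_x(10000, 555555, NEW.situations);
-- and if it is stored as plain text, a cast converts it back to an array:
--   SELECT function_x(10000, 555555, NEW.situations::text[]);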


Regards,

Aislan Luiz Wendling
Pavel Suderevsky | 20 Jul 19:30 2016

9.5: pg_stat_statement and pgbench execution time discrepancies

Hi,

During performance tests I noticed that pgbench's overall results differ from what pg_stat_statements reports.

PostgreSQL 9.5.3, CentOS 7.2. This was the first run after the pg_stat_statements extension was created, and the first pgbench run.

pgbench -c 30 -T 300 -U postgres -P 10 -r -v
statement latencies in milliseconds:
        0.003431        \set nbranches 1 * :scale
        0.001046        \set ntellers 10 * :scale
        0.000891        \set naccounts 100000 * :scale
        0.001356        \setrandom aid 1 :naccounts
        0.000938        \setrandom bid 1 :nbranches
        0.000879        \setrandom tid 1 :ntellers
        0.000974        \setrandom delta -5000 5000
        0.102258        BEGIN;
        0.228744        UPDATE pgbench_accounts SET abalance = abalance + :delta WHERE aid = :aid;
        0.156087        SELECT abalance FROM pgbench_accounts WHERE aid = :aid;
        17.715645       UPDATE pgbench_tellers SET tbalance = tbalance + :delta WHERE tid = :tid;
        6.222220        UPDATE pgbench_branches SET bbalance = bbalance + :delta WHERE bid = :bid;
        0.294592        INSERT INTO pgbench_history (tid, bid, aid, delta, mtime) VALUES (:tid, :bid, :aid, :delta, CURRENT_TIMESTAMP);
        0.502843        END;

pg_stat_statements 
Example 1
query               | UPDATE pgbench_accounts SET abalance = abalance + ? WHERE aid = ?;
calls               | 118804
total_time          | 5944.19700000145
min_time            | 0.017
max_time            | 146.501
mean_time           | 0.0500336436483624
stddev_time         | 0.860838186600729
rows                | 118804
Example 2
query               | INSERT INTO pgbench_history (tid, bid, aid, delta, mtime) VALUES (?, ?, ?, ?, CURRENT_TIMESTAMP);
calls               | 118804
total_time          | 16533.2040000006
min_time            | 0.075
max_time            | 1.583
mean_time           | 0.139163698191979
stddev_time         | 0.0342061695940176
rows                | 118804
 
Could anybody explain why the results are different? Where is the inaccuracy: in pg_stat_statements or in pgbench?
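
For reference, the per-statement figures can be pulled side by side with a query like this (a sketch; it assumes the pg_stat_statements extension is installed, whose time columns are reported in milliseconds):

SELECT query,
       calls,
       round(mean_time::numeric, 3)   AS mean_ms,
       round(stddev_time::numeric, 3) AS stddev_ms,
       round(total_time::numeric, 1)  AS total_ms
  FROM pg_stat_statements
 WHERE query LIKE '%pgbench_%'
 ORDER BY total_time DESC;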

Thanks in advance.

