Dave Page | 23 Jun 16:26 2016

PostgreSQL 9.6 Beta 2 Released

The PostgreSQL Global Development Group announces today that the
second beta release of PostgreSQL 9.6 is available for download. This
release contains previews of all of the features which will be
available in the final release of version 9.6, including fixes to many
of the issues found in the first beta.  Users are encouraged to begin
testing their applications against 9.6 beta 2.

Changes Since Beta1
-------------------

Our users and contributors reported bugs against 9.6 beta 1, and some
of them have been fixed in this release.  This includes multiple fixes
for failure and performance issues in parallel query.  We urge our
community to re-test to ensure that these bugs are actually fixed,
including:

* update most contrib extensions for parallel query
* two fixes for pg_trgm (trigram) bugs
* rewrite code to estimate join sizes for better performance
* correct handling of argument and result datatypes for partial aggregation
* fix lazy_scan_heap so that it won't mark pages all-frozen too soon
* mark additional functions as parallel-unsafe
* check PlaceHolderVars before pushing down a join in postgres_fdw
* improve the situation for parallel query versus temp relations
* don't generate parallel paths for rels with parallel-restricted outputs
* make psql_crosstab plans more stable
* finish loose ends for SQL ACCESS METHOD objects, including pg_dump
* stop the executor if no more tuples can be sent from worker to leader
* several pg_upgrade fixes to support new features
* fix regression tests for phrase search

Jernigan, Kevin | 22 Jun 01:33 2016

Announcement: Amazon RDS for PostgreSQL now supports cross-region read replicas

You can now quickly create cross-region read replicas for your unencrypted Amazon RDS for PostgreSQL database instances with just a few clicks in the AWS Management Console. You can use this feature to reduce read latency for customers in different geographic locations, to create a backup of your primary database for disaster recovery purposes, or to quickly migrate your database to a different AWS Region.

Disaster Recovery: You can create cross-region read replicas of your primary database instance as a disaster recovery solution. If your primary region faces a disruption, you can promote the replica to a master and keep your business operational.

Scaling: You can use cross-region read replicas to support read queries from your workloads across various geographic locations. This will reduce latency by serving your customers from a database that is close to them.

Cross-region Migration: If you would like to migrate your database instance quickly to another AWS region, you may do so by using cross-region replication. Simply create a replica in your target region, and once it is ready, promote it to master and point your application to it.

This feature is available for all RDS PostgreSQL databases running version 9.5.2 and higher or 9.4.7 and higher. To create a cross-region replica of a database instance running an older version, first upgrade it to a supported version by performing a database version upgrade. To learn more about cross-region replication for RDS PostgreSQL, please refer to the RDS documentation.

 

Yeoh Ray Mond | 21 Jun 17:38 2016

DB Doc 3.1 released

DB Doc 3.1 has been released and is available for immediate download.

DB Doc is a PostgreSQL database documentation tool that analyzes your 
database schema and generates the schema documentation in PDF, HTML, 
Word, and CHM formats. More details here - 
http://www.yohz.com/dbdoc_details.htm

-- 
Yeoh Ray Mond
Associate, Yohz Software
http://www.yohz.com

--

-- 
Sent via pgsql-announce mailing list (pgsql-announce <at> postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-announce

Jim Mlodgenski | 22 Jun 13:02 2016

Registration open for PGDay Philly 2016

PGDay Philly 2016 is a one-day, single-track event with 7 talks from PostgreSQL Core Team members and other contributors. It will be held on July 21 at Huntsman Hall at the Wharton School and is co-located with this year's DjangoCon: https://2016.djangocon.us/.

See the local Philly PostgreSQL User Group Meetup page for more information.


Hiroshi Saito | 17 Jun 16:35 2016

psqlODBC 09.05.0300 Released

Hi, all.

We are pleased to announce the release of psqlODBC 09.05.0300. This is a
bugfix release. Please see the notes at:
https://odbc.postgresql.org/docs/release.html

psqlODBC may be downloaded in source, Windows installer, merge module,
and basic zip file formats.

The new Windows installer installs both the 32-bit and 64-bit drivers at
once on a 64-bit OS. Both drivers may be needed there, because 32-bit
applications require the 32-bit driver and 64-bit applications require
the 64-bit driver.

Please post any bug reports to the mailing list.

I'd like to take this opportunity to thank all those involved with the
development, testing and bug fixing of the updated driver.

We are grateful for the help of many people. Thanks!

-- 
psqlODBC team.
email:   pgsql-odbc <at> postgresql.org
website: https://odbc.postgresql.org/


Bo Peng | 17 Jun 14:02 2016

pgpool-II 3.5.3, 3.4.7, 3.3.11, 3.2.16, 3.1.19 and pgpoolAdmin 3.5.3 released

Hi,

Pgpool Global Development Group is pleased to announce the availability
of pgpool-II 3.5.3, 3.4.7, 3.3.11, 3.2.16, 3.1.19, and pgpoolAdmin 3.5.3.
These are the latest stable minor versions of each major version of pgpool-II.

Pgpool-II is a tool that adds useful features to PostgreSQL, including
connection pooling, load balancing, automatic failover, and more.

You can download them from:
http://pgpool.net/mediawiki/index.php/Downloads

                        3.5.3 (ekieboshi) 2016/06/17

* Version 3.5.3

    This is a bugfix release against pgpool-II 3.5.2.

    __________________________________________________________________

* New features

    - Allow access to pgpool while health checking is in progress (Tatsuo Ishii)

      Previously, any attempt to connect to pgpool failed while pgpool was
      health-checking a failed node, even with fail_over_on_backend_error
      off, because each pgpool child first tries to connect to all
      backends, including the failed one, and exits when a connection
      fails (which, for the failed node, it inevitably does). This is a
      temporary condition that resolves once pgpool executes failover, but
      while the health check is retrying it can persist, depending on the
      settings of health_check_max_retries and health_check_retry_delay.
      This release mitigates the problem:

      - When an attempt to connect to a backend fails, give up on the
        failed node and skip to another node, rather than exiting the
        process, provided pgpool is operating in streaming replication
        mode and the node is not the primary.

      - Mark the local status of the failed node as "down".

      - This lets the primary node be selected as the load balance node,
        so all queries are sent to the primary. If there are other healthy
        standby nodes, one of them is chosen as the load balance node
        instead.

      - After the session ends, the child process exits so that the stale
        local status is not retained.

      Per [pgpool-hackers: 1531].

* Bug fixes

    - Fix is_set_transaction_serializable() when
      SET default_transaction_isolation TO 'serializable'. (Bo Peng)

      In streaming replication mode, SET default_transaction_isolation
      TO 'serializable' was sent not only to the primary but also to the
      standby servers, which caused an error. The fix is to send it only
      to the primary server in streaming replication mode.

      See bug 191 for related info.
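
      The routing rule introduced by this fix can be sketched as follows
      (a simplified Python illustration, not pgpool-II's actual C code;
      the function name and return values are invented):

```python
def route_set_statement(stmt: str, streaming_replication: bool) -> set:
    """Pick the backends that should receive a SET statement (sketch).

    In streaming replication mode, SET default_transaction_isolation TO
    'serializable' must go to the primary only: serializable transactions
    cannot run on a read-only standby, so sending it there errors out.
    """
    normalized = " ".join(stmt.lower().split())
    if (streaming_replication
            and normalized.startswith("set default_transaction_isolation")
            and "serializable" in normalized):
        return {"primary"}
    return {"primary", "standby"}

print(route_set_statement(
    "SET default_transaction_isolation TO 'serializable'", True))  # → {'primary'}
```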

    - Fix Chinese documentation bug about raw mode (Yugo Nagata, Bo Peng)
      Connection pooling is available in raw mode.

    - Fix confusing comments in pgpool.conf (Tatsuo Ishii)

    - Fix extended protocol handling in raw mode (Tatsuo Ishii)

      Bug152 reveals that extended protocol handling in raw mode (actually
      other than in stream mode) was wrong in Describe() and Close().
      Unlike stream mode, they should wait for backend response.

      See bug 152 for related info.

    - Permit pgpool to support multiple SSL cipher protocols (Muhammad Usama)

      Previously, TLSv1_method() was used to initialize the SSL context,
      which unnecessarily limited SSL communication to the TLSv1 protocol,
      even though PostgreSQL supports other protocol versions as well.
      This commit initializes the SSL session using SSLv23_method() (the
      same method PostgreSQL uses), which negotiates the highest mutually
      supported protocol version and removes the restriction to one
      specific protocol version.
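
      The same negotiate-instead-of-pin idea can be seen in Python's ssl
      module (an analogy only; pgpool itself uses the OpenSSL C API):

```python
import ssl

# A negotiating context, analogous to SSLv23_method(): no single protocol
# version is pinned, so the handshake settles on the highest TLS version
# both peers support.
ctx = ssl.create_default_context()
assert ctx.maximum_version == ssl.TLSVersion.MAXIMUM_SUPPORTED
```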

    - Fix statement timeout errors triggered by do_query() (Tatsuo Ishii)

      If statement timeout is enabled on the backend and do_query() sends
      a query to the primary node while all subsequent user queries are
      sent to the standby, the next command (for example END) could cause
      a statement timeout error on the primary, which raises a kind
      mismatch error in pgpool-II.

      This fix mitigates the problem by sending a sync message instead of
      a flush message in do_query(), expecting the sync message to reset
      the statement timeout timer when in an explicit transaction. This
      technique cannot be used for implicit transactions, because the sync
      message removes the unnamed portal, if any.

      Additionally, pg_stat_statement will no longer show the query issued
      by do_query() as "running".

      See bug 194 for related info.
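
      For reference, the two frontend messages involved here differ only
      in their tag byte on the wire; a minimal sketch of how they are
      encoded in the PostgreSQL protocol:

```python
import struct

def pg_frontend_message(tag: bytes) -> bytes:
    # A payload-less frontend message: one tag byte followed by an int32
    # length field (the length counts itself, so it is 4 here).
    return tag + struct.pack("!i", 4)

FLUSH = pg_frontend_message(b"H")  # deliver pending output; portal state kept
SYNC = pg_frontend_message(b"S")   # also ends an implicit transaction and
                                   # drops the unnamed portal
```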

    - Deal with the case when the primary is not node 0 in streaming
      replication mode (Tatsuo Ishii)

      http://www.pgpool.net/mantisbt/view.php?id=194#c837 reported that
      if the primary is not node 0, a statement timeout could still occur
      even after bug194-3.3.diff was applied. Investigation showed that
      the MASTER macro could return a node other than the primary or the
      load balance node, which is not supposed to happen, so do_query()
      sent queries to the wrong node (this is not clear from the report,
      but was confirmed by investigation).

      pool_virtual_master_db_node_id(), which is called by the MASTER
      macro, returns query_context->virtual_master_node_id if a query
      context exists. This can be the wrong node if the variable has not
      been set yet. To fix this, the function is modified so that if the
      variable is neither the load balance node nor the primary node, the
      primary node id is returned.
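
      The repaired selection logic amounts to a guarded fallback; a
      minimal sketch (simplified from the C implementation, with invented
      parameter names):

```python
def virtual_master_node_id(ctx_node_id, load_balance_node_id, primary_node_id):
    """Return the node do_query() should target (illustrative sketch).

    The query context's node id is trusted only when it is the load
    balance node or the primary; anything else falls back to the primary.
    """
    if ctx_node_id in (load_balance_node_id, primary_node_id):
        return ctx_node_id
    return primary_node_id

# Primary is node 1 (not node 0): a stale or unset context id of 0 no
# longer routes queries to the wrong node.
print(virtual_master_node_id(0, 1, 1))  # → 1
```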

    - Fix a possible hang during health checking (Yugo Nagata)

      Health checking could hang when no data was sent from the backend
      after connect(2) succeeded. To fix this, pool_check_fd() now
      returns 1 when select(2) exits with EINTR due to SIGALRM while a
      health check is in progress.

      Reported, with a patch, by harukat; modified by Yugo.

      See bug 204 for related info.
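
      The shape of the fix can be demonstrated in miniature (a POSIX-only
      Python sketch, not pgpool's C code; the SIGALRM handler raises so
      that the EINTR surfaces, mimicking the health-check timer
      interrupting select(2)):

```python
import errno
import os
import select
import signal

def pool_check_fd(fd, timeout):
    """Wait for fd to become readable; return 0 on data, 1 otherwise.

    Mirrors the fixed behaviour: if select(2) is interrupted (EINTR) by
    the health-check SIGALRM, report failure instead of hanging.
    """
    try:
        readable, _, _ = select.select([fd], [], [], timeout)
    except InterruptedError:  # EINTR
        return 1
    return 0 if readable else 1

def on_alarm(signum, frame):
    raise InterruptedError(errno.EINTR, "interrupted by SIGALRM")

signal.signal(signal.SIGALRM, on_alarm)
r, w = os.pipe()                           # a backend that never sends data
signal.setitimer(signal.ITIMER_REAL, 0.1)  # the health-check timer
result = pool_check_fd(r, 5.0)
signal.setitimer(signal.ITIMER_REAL, 0)    # cancel the timer
print(result)  # → 1: the health check gives up instead of hanging
```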

    - Change the Makefile under the directory src/sql/, as proposed in
      [pgpool-hackers: 1611] (Bo Peng)

    - Fix for 0000197: pgpool hangs connections to database (Muhammad Usama)

      Client connections could get stuck when a backend node and a remote
      pgpool-II node became unavailable at the same time. The cause was
      missing command timeout handling in the function that sends IPC
      commands to the watchdog.

    - Fix bug with load balance node id info on shmem (Tatsuo Ishii)

      In a few places the load balance node id was mistakenly stored in
      the wrong place. It should be stored in:
      ConnectionInfo *con_info[child id, connection pool_id, backend id].load_balancing_node
      but was in fact stored in:
      *con_info[child id, connection pool_id, 0].load_balancing_node

      As long as the backend id in question is 0, this is harmless.
      However, while testing pgpool-II 3.6's failover enhancement, with
      the primary as node 1 (also the load balance node) and the standby
      as node 0, a client connected to node 1 was unexpectedly
      disconnected when failover happened on node 0, which revealed the
      bug. The bug appears to have existed for a long time, but had gone
      unnoticed until today for the reason above.

    - Fix issues reported by Coverity scan (Muhammad Usama)

===============================================================================

                        3.4.7 (tataraboshi) 2016/06/17

* Version 3.4.7

    This is a bugfix release against pgpool-II 3.4.6.

    __________________________________________________________________

* New features

    - Allow access to pgpool while health checking is in progress (Tatsuo Ishii)

      Previously, any attempt to connect to pgpool failed while pgpool was
      health-checking a failed node, even with fail_over_on_backend_error
      off, because each pgpool child first tries to connect to all
      backends, including the failed one, and exits when a connection
      fails (which, for the failed node, it inevitably does). This is a
      temporary condition that resolves once pgpool executes failover, but
      while the health check is retrying it can persist, depending on the
      settings of health_check_max_retries and health_check_retry_delay.
      This release mitigates the problem:

      - When an attempt to connect to a backend fails, give up on the
        failed node and skip to another node, rather than exiting the
        process, provided pgpool is operating in streaming replication
        mode and the node is not the primary.

      - Mark the local status of the failed node as "down".

      - This lets the primary node be selected as the load balance node,
        so all queries are sent to the primary. If there are other healthy
        standby nodes, one of them is chosen as the load balance node
        instead.

      - After the session ends, the child process exits so that the stale
        local status is not retained.

      Per [pgpool-hackers: 1531].

* Bug fixes

    - Fix is_set_transaction_serializable() when
      SET default_transaction_isolation TO 'serializable'. (Bo Peng)

      In streaming replication mode, SET default_transaction_isolation
      TO 'serializable' was sent not only to the primary but also to the
      standby servers, which caused an error. The fix is to send it only
      to the primary server in streaming replication mode.

      See bug 191 for related info.

    - Fix Chinese documentation bug about raw mode (Yugo Nagata, Bo Peng)
      Connection pooling is available in raw mode.

    - Fix confusing comments in pgpool.conf (Tatsuo Ishii)

    - Permit pgpool to support multiple SSL cipher protocols (Muhammad Usama)

      Previously, TLSv1_method() was used to initialize the SSL context,
      which unnecessarily limited SSL communication to the TLSv1 protocol,
      even though PostgreSQL supports other protocol versions as well.
      This commit initializes the SSL session using SSLv23_method() (the
      same method PostgreSQL uses), which negotiates the highest mutually
      supported protocol version and removes the restriction to one
      specific protocol version.

    - Fix statement timeout errors triggered by do_query() (Tatsuo Ishii)

      If statement timeout is enabled on the backend and do_query() sends
      a query to the primary node while all subsequent user queries are
      sent to the standby, the next command (for example END) could cause
      a statement timeout error on the primary, which raises a kind
      mismatch error in pgpool-II.

      This fix mitigates the problem by sending a sync message instead of
      a flush message in do_query(), expecting the sync message to reset
      the statement timeout timer when in an explicit transaction. This
      technique cannot be used for implicit transactions, because the sync
      message removes the unnamed portal, if any.

      Additionally, pg_stat_statement will no longer show the query issued
      by do_query() as "running".

      See bug 194 for related info.

    - Fix a possible hang during health checking (Yugo Nagata)

      Health checking could hang when no data was sent from the backend
      after connect(2) succeeded. To fix this, pool_check_fd() now
      returns 1 when select(2) exits with EINTR due to SIGALRM while a
      health check is in progress.

      Reported, with a patch, by harukat; modified by Yugo.

      See bug 204 for related info.

    - Change the Makefile under the directory src/sql/, as proposed in
      [pgpool-hackers: 1611] (Bo Peng)

    - Fix bug with load balance node id info on shmem (Tatsuo Ishii)

      In a few places the load balance node id was mistakenly stored in
      the wrong place. It should be stored in:
      ConnectionInfo *con_info[child id, connection pool_id, backend id].load_balancing_node
      but was in fact stored in:
      *con_info[child id, connection pool_id, 0].load_balancing_node

      As long as the backend id in question is 0, this is harmless.
      However, while testing pgpool-II 3.6's failover enhancement, with
      the primary as node 1 (also the load balance node) and the standby
      as node 0, a client connected to node 1 was unexpectedly
      disconnected when failover happened on node 0, which revealed the
      bug. The bug appears to have existed for a long time, but had gone
      unnoticed until today for the reason above.

    - Deal with the case when the primary is not node 0 in streaming replication mode. (Tatsuo Ishii)

      http://www.pgpool.net/mantisbt/view.php?id=194#c837 reported that
      if the primary is not node 0, a statement timeout could still occur
      even after bug194-3.3.diff was applied. Investigation showed that
      the MASTER macro could return a node other than the primary or the
      load balance node, which is not supposed to happen, so do_query()
      sent queries to the wrong node (this is not clear from the report,
      but was confirmed by investigation).

      pool_virtual_master_db_node_id(), which is called by the MASTER
      macro, returns query_context->virtual_master_node_id if a query
      context exists. This can be the wrong node if the variable has not
      been set yet. To fix this, the function is modified so that if the
      variable is neither the load balance node nor the primary node, the
      primary node id is returned.

===============================================================================

                        3.3.11 (tokakiboshi) 2016/06/17

* Version 3.3.11

    This is a bugfix release against pgpool-II 3.3.10.

    __________________________________________________________________

* New features

    - Allow access to pgpool while health checking is in progress (Tatsuo Ishii)

      Previously, any attempt to connect to pgpool failed while pgpool was
      health-checking a failed node, even with fail_over_on_backend_error
      off, because each pgpool child first tries to connect to all
      backends, including the failed one, and exits when a connection
      fails (which, for the failed node, it inevitably does). This is a
      temporary condition that resolves once pgpool executes failover, but
      while the health check is retrying it can persist, depending on the
      settings of health_check_max_retries and health_check_retry_delay.
      This release mitigates the problem:

      - When an attempt to connect to a backend fails, give up on the
        failed node and skip to another node, rather than exiting the
        process, provided pgpool is operating in streaming replication
        mode and the node is not the primary.

      - Mark the local status of the failed node as "down".

      - This lets the primary node be selected as the load balance node,
        so all queries are sent to the primary. If there are other healthy
        standby nodes, one of them is chosen as the load balance node
        instead.

      - After the session ends, the child process exits so that the stale
        local status is not retained.

      Per [pgpool-hackers: 1531].

* Bug fixes

    - Fix is_set_transaction_serializable() when
      SET default_transaction_isolation TO 'serializable'. (Bo Peng)

      In streaming replication mode, SET default_transaction_isolation
      TO 'serializable' was sent not only to the primary but also to the
      standby servers, which caused an error. The fix is to send it only
      to the primary server in streaming replication mode.

      See bug 191 for related info.

    - Fix Chinese documentation bug about raw mode (Yugo Nagata, Bo Peng)
      Connection pooling is available in raw mode.

    - Fix confusing comments in pgpool.conf (Tatsuo Ishii)

    - Permit pgpool to support multiple SSL cipher protocols (Muhammad Usama)

      Previously, TLSv1_method() was used to initialize the SSL context,
      which unnecessarily limited SSL communication to the TLSv1 protocol,
      even though PostgreSQL supports other protocol versions as well.
      This commit initializes the SSL session using SSLv23_method() (the
      same method PostgreSQL uses), which negotiates the highest mutually
      supported protocol version and removes the restriction to one
      specific protocol version.

    - Fix statement timeout errors triggered by do_query() (Tatsuo Ishii)

      If statement timeout is enabled on the backend and do_query() sends
      a query to the primary node while all subsequent user queries are
      sent to the standby, the next command (for example END) could cause
      a statement timeout error on the primary, which raises a kind
      mismatch error in pgpool-II.

      This fix mitigates the problem by sending a sync message instead of
      a flush message in do_query(), expecting the sync message to reset
      the statement timeout timer when in an explicit transaction. This
      technique cannot be used for implicit transactions, because the sync
      message removes the unnamed portal, if any.

      Additionally, pg_stat_statement will no longer show the query issued
      by do_query() as "running".

      See bug 194 for related info.

    - Deal with the case when the primary is not node 0 in streaming replication mode. (Tatsuo Ishii)

      http://www.pgpool.net/mantisbt/view.php?id=194#c837 reported that
      if the primary is not node 0, a statement timeout could still occur
      even after bug194-3.3.diff was applied. Investigation showed that
      the MASTER macro could return a node other than the primary or the
      load balance node, which is not supposed to happen, so do_query()
      sent queries to the wrong node (this is not clear from the report,
      but was confirmed by investigation).

      pool_virtual_master_db_node_id(), which is called by the MASTER
      macro, returns query_context->virtual_master_node_id if a query
      context exists. This can be the wrong node if the variable has not
      been set yet. To fix this, the function is modified so that if the
      variable is neither the load balance node nor the primary node, the
      primary node id is returned.

    - Change the Makefile under the directory src/sql/, as proposed in
      [pgpool-hackers: 1611] (Bo Peng)

    - Fix a possible hang during health checking (Yugo Nagata)

      Health checking could hang when no data was sent from the backend
      after connect(2) succeeded. To fix this, pool_check_fd() now
      returns 1 when select(2) exits with EINTR due to SIGALRM while a
      health check is in progress.

      Reported, with a patch, by harukat; modified by Yugo. Per bug #204.

      Backported from 3.4 or later:
      https://git.postgresql.org/gitweb/?p=pgpool2.git;a=commitdiff;h=ed9f2900f1b611f5cfd52e8f758c3616861e60c0

    - Fix bug with load balance node id info on shmem (Tatsuo Ishii)

      In a few places the load balance node id was mistakenly stored in
      the wrong place. It should be stored in:
      ConnectionInfo *con_info[child id, connection pool_id, backend id].load_balancing_node
      but was in fact stored in:
      *con_info[child id, connection pool_id, 0].load_balancing_node

      As long as the backend id in question is 0, this is harmless.
      However, while testing pgpool-II 3.6's failover enhancement, with
      the primary as node 1 (also the load balance node) and the standby
      as node 0, a client connected to node 1 was unexpectedly
      disconnected when failover happened on node 0, which revealed the
      bug. The bug appears to have existed for a long time, but had gone
      unnoticed until today for the reason above.

===============================================================================

                        3.2.16 (namameboshi) 2016/06/17

* Version 3.2.16

    This is a bugfix release against pgpool-II 3.2.15.

    __________________________________________________________________

* New features

    - Allow access to pgpool while health checking is in progress (Tatsuo Ishii)

      Previously, any attempt to connect to pgpool failed while pgpool was
      health-checking a failed node, even with fail_over_on_backend_error
      off, because each pgpool child first tries to connect to all
      backends, including the failed one, and exits when a connection
      fails (which, for the failed node, it inevitably does). This is a
      temporary condition that resolves once pgpool executes failover, but
      while the health check is retrying it can persist, depending on the
      settings of health_check_max_retries and health_check_retry_delay.
      This release mitigates the problem:

      - When an attempt to connect to a backend fails, give up on the
        failed node and skip to another node, rather than exiting the
        process, provided pgpool is operating in streaming replication
        mode and the node is not the primary.

      - Mark the local status of the failed node as "down".

      - This lets the primary node be selected as the load balance node,
        so all queries are sent to the primary. If there are other healthy
        standby nodes, one of them is chosen as the load balance node
        instead.

      - After the session ends, the child process exits so that the stale
        local status is not retained.

* Bug fixes

    - Fix is_set_transaction_serializable() when
      SET default_transaction_isolation TO 'serializable'. (Bo Peng)

      In streaming replication mode, SET default_transaction_isolation
      TO 'serializable' was sent not only to the primary but also to the
      standby servers, which caused an error. The fix is to send it only
      to the primary server in streaming replication mode.

      See bug 191 for related info.

    - Fix Chinese documentation bug about raw mode (Yugo Nagata, Bo Peng)
      Connection pooling is available in raw mode.

    - Fix confusing comments in pgpool.conf (Tatsuo Ishii)

    - Permit pgpool to support multiple SSL cipher protocols (Muhammad Usama)

      Previously, TLSv1_method() was used to initialize the SSL context,
      which unnecessarily limited SSL communication to the TLSv1 protocol,
      even though PostgreSQL supports other protocol versions as well.
      This commit initializes the SSL session using SSLv23_method() (the
      same method PostgreSQL uses), which negotiates the highest mutually
      supported protocol version and removes the restriction to one
      specific protocol version.

    - Fix statement timeout errors triggered by do_query() (Tatsuo Ishii)

      If statement timeout is enabled on the backend and do_query() sends
      a query to the primary node while all subsequent user queries are
      sent to the standby, the next command (for example END) could cause
      a statement timeout error on the primary, which raises a kind
      mismatch error in pgpool-II.

      This fix mitigates the problem by sending a sync message instead of
      a flush message in do_query(), expecting the sync message to reset
      the statement timeout timer when in an explicit transaction. This
      technique cannot be used for implicit transactions, because the sync
      message removes the unnamed portal, if any.

      Additionally, pg_stat_statement will no longer show the query issued
      by do_query() as "running".

      See bug 194 for related info.

    - Deal with the case when the primary is not node 0 in streaming replication mode. (Tatsuo Ishii)

      http://www.pgpool.net/mantisbt/view.php?id=194#c837 reported that
      if the primary is not node 0, a statement timeout could still occur
      even after bug194-3.3.diff was applied. Investigation showed that
      the MASTER macro could return a node other than the primary or the
      load balance node, which is not supposed to happen, so do_query()
      sent queries to the wrong node (this is not clear from the report,
      but was confirmed by investigation).

      pool_virtual_master_db_node_id(), which is called by the MASTER
      macro, returns query_context->virtual_master_node_id if a query
      context exists. This can be the wrong node if the variable has not
      been set yet. To fix this, the function is modified so that if the
      variable is neither the load balance node nor the primary node, the
      primary node id is returned.

    - Change the Makefile under the directory src/sql/, as proposed in
      [pgpool-hackers: 1611] (Bo Peng)

    - Fix a possible hang during health checking (Yugo Nagata)

      Health checking could hang when no data was sent from the backend
      after connect(2) succeeded. To fix this, pool_check_fd() now
      returns 1 when select(2) exits with EINTR due to SIGALRM while a
      health check is in progress.

      Reported, with a patch, by harukat; modified by Yugo. Per bug #204.

      Backported from 3.4 or later:
      https://git.postgresql.org/gitweb/?p=pgpool2.git;a=commitdiff;h=ed9f2900f1b611f5cfd52e8f758c3616861e60c0

    - Fix bug with load balance node id info on shmem (Tatsuo Ishii)

      In a few places the load balance node id was mistakenly stored in
      the wrong place. It should be stored in:
      ConnectionInfo *con_info[child id, connection pool_id, backend id].load_balancing_node
      but was in fact stored in:
      *con_info[child id, connection pool_id, 0].load_balancing_node

      As long as the backend id in question is 0, this is harmless.
      However, while testing pgpool-II 3.6's failover enhancement, with
      the primary as node 1 (also the load balance node) and the standby
      as node 0, a client connected to node 1 was unexpectedly
      disconnected when failover happened on node 0, which revealed the
      bug. The bug appears to have existed for a long time, but had gone
      unnoticed until today for the reason above.

===============================================================================

                        3.1.19 (hatsuiboshi) 2016/06/17

* Version 3.1.19

    This is a bugfix release against pgpool-II 3.1.18.

    __________________________________________________________________

* Bug fixes

    - Fix is_set_transaction_serializable() when
      SET default_transaction_isolation TO 'serializable'. (Bo Peng)

      In streaming replication mode, SET default_transaction_isolation
      TO 'serializable' was sent not only to the primary but also to the
      standby servers, which caused an error. The fix is to send it only
      to the primary server in streaming replication mode.

      See bug 191 for related info.

    - Fix Chinese documentation bug about raw mode (Yugo Nagata, Bo Peng)

      Connection pooling is available in raw mode.

    - Fix confusing comments in pgpool.conf (Tatsuo Ishii)

    - Permit pgpool to support multiple SSL protocol versions (Muhammad Usama)

      Previously TLSv1_method() was used to initialize the SSL context,
      which unnecessarily limited SSL communication to the TLSv1
      protocol, while PostgreSQL supports other protocol versions as
      well. The SSL session is now initialized using SSLv23_method()
      (the same method PostgreSQL uses), which negotiates the highest
      mutually supported protocol version and removes the restriction
      to one specific version.

    - Mitigate statement timeout errors caused by do_query() (Tatsuo Ishii)

      If statement timeout is enabled on the backend and do_query()
      sends a query to the primary node while all following user queries
      are sent to the standby, it is possible that the next command, for
      example END, causes a statement timeout error on the primary, and
      a kind mismatch error is raised on pgpool-II.

      This fix mitigates the problem by sending a sync message instead
      of a flush message in do_query(), expecting that the sync message
      resets the statement timeout timer if we are in an explicit
      transaction. We cannot use this technique in the implicit
      transaction case, because the sync message removes the unnamed
      portal if there is one.

      In addition, pg_stat_statements will no longer show the query
      issued by do_query() as "running".

      See bug 194 for related info.
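The choice between the two protocol messages can be sketched as follows (a hypothetical helper; 'S' and 'H' are the actual type bytes of the Sync and Flush messages in the PostgreSQL frontend/backend protocol):

```c
/* Hypothetical helper sketching the fix's decision: which message should
 * do_query() send after its query? Sync ('S') resets the backend's
 * statement_timeout timer but also closes the unnamed portal, so it is
 * only safe inside an explicit transaction; otherwise fall back to
 * Flush ('H'). */
typedef enum
{
    MSG_FLUSH = 'H', /* Flush message type byte */
    MSG_SYNC  = 'S'  /* Sync message type byte */
} ProtoMsg;

ProtoMsg after_query_msg(int in_explicit_transaction)
{
    return in_explicit_transaction ? MSG_SYNC : MSG_FLUSH;
}
```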

    - Deal with the case where the primary is not node 0 in streaming
      replication mode (Tatsuo Ishii)

      http://www.pgpool.net/mantisbt/view.php?id=194#c837 reported that
      if the primary is not node 0, a statement timeout could still
      occur even after bug194-3.3.diff was applied. After some
      investigation, it appeared that the MASTER macro could return a
      node other than the primary or load balance node, which was not
      supposed to happen, so do_query() sent queries to the wrong node
      (this is not clear from the report but was confirmed during
      investigation).

      pool_virtual_master_db_node_id(), which is called by the MASTER
      macro, returns query_context->virtual_master_node_id if a query
      context exists. This could return the wrong node if the variable
      had not been set yet. To fix this, the function is modified: if
      the variable is neither the load balance node nor the primary
      node, the primary node id is returned.
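The guard added by the fix boils down to something like this (a simplified, hypothetical version of the check in pool_virtual_master_db_node_id()):

```c
/* Simplified sketch of the fix: trust the per-query virtual node id only
 * if it points at the primary or the load balance node; otherwise fall
 * back to the primary so queries are never routed to a wrong node. */
int virtual_master_node_id(int candidate, int primary_node, int lb_node)
{
    if (candidate == primary_node || candidate == lb_node)
        return candidate;
    return primary_node;
}
```

For example, with primary node 1 and load balance node 0, a stale candidate of 2 now falls back to node 1 instead of being used as-is.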

    - Change the Makefile under the src/sql/ directory, as proposed
      in [pgpool-hackers: 1611] (Bo Peng)

    - Fix a possible hang during health checking (Yugo Nagata)

      Health checking could hang when no data was sent
      from the backend after connect(2) succeeded. To fix this,
      pool_check_fd() now returns 1 when select(2) exits with
      EINTR due to SIGALRM while health checking is performed.

      Reported and patch provided by harukat, with some modifications
      by Yugo. Per bug #204.

      Backported from 3.4 or later:
      https://git.postgresql.org/gitweb/?p=pgpool2.git;a=commitdiff;h=ed9f2900f1b611f5cfd52e8f758c3616861e60c0

    - Fix bug with load balance node id info on shmem (Tatsuo Ishii)

      There were a few places where the load balance node id was stored
      in the wrong place. It should be placed at: ConnectionInfo
      *con_info[child id, connection pool id, backend
      id].load_balancing_node. In fact it was placed at: *con_info[child
      id, connection pool id, 0].load_balancing_node.

      As long as the backend id in question is 0, this is harmless.
      However, while testing pgpool-II 3.6's failover enhancement, if the
      primary node is 1 (which is the load balance node) and the standby
      is 0, a client connecting to node 1 was disconnected when failover
      happened on node 0. This was unexpected, and the bug was revealed.

      The bug appears to have existed for a long time, but it was not
      found until now, for the reason above.

===============================================================================

-- 
Bo Peng <pengbo@sraoss.co.jp>
SRA OSS, Inc. Japan

--

-- 
Sent via pgsql-announce mailing list (pgsql-announce@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-announce

Dave Page | 10 Jun 16:48 2016

pgAdmin 4 v1.0 Beta 1 Released

I'm pleased to announce the release of pgAdmin 4 v1.0 Beta 1 for
testing. You can find more details on the website:

Announcement: https://www.pgadmin.org/

Documentation: https://www.pgadmin.org/docs4/dev/index.html

Downloads: https://www.pgadmin.org/download/

Bug tracker: https://redmine.postgresql.org/projects/pgadmin4/issues
(requires a PostgreSQL community login)

pgAdmin 4 is a complete rewrite of pgAdmin, written in
Python/JavaScript, and deployable as a desktop or web application.
There's a much more modern look and feel, improved UI and workflows,
and more flexibility and reliability than pgAdmin 3. There are a
number of screenshots on the website announcement to give an idea of
what it looks like.

Please download, test, and report any (non-duplicate - see the tracker
above) issues so we can get ready for the final release with
PostgreSQL 9.6. The beta 2 release will coincide with PostgreSQL 9.6
beta 2 and will stay in sync from there on.

Builds are available for Windows and Mac which include the desktop
runtime, an early pip wheel which just includes the web code, and a
source tarball. RPM and DEB packages are still in development. For
bleeding edge code, please see the Git repository at:

https://git.postgresql.org/gitweb/?p=pgadmin4.git;a=summary

Finally, I must thank everyone involved in getting us this far - it's
taken a team of more than 15 people at EDB something like 10,000 hours
of effort in total, plus those contributors from the community who
weren't held to strict project plans but did play a vital role
regardless! For more info on the team and features, please see my blog
post at http://pgsnake.blogspot.co.uk/2016/04/pgadmin-4-elephant-nears-finish-line.html

Thanks, Dave

-- 
Dave Page
Blog: http://pgsnake.blogspot.com
Twitter: @pgsnake

EnterpriseDB UK: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


Steve Singer | 3 Jun 03:37 2016

Slony 2.2.5 released

The Slony team is pleased to announce Slony 2.2.5, the next minor
release of the Slony 2.2.x series.
Slony 2.2.5 includes the following changes

   - PG 9.5 makefile fix for win32
   - PG 9.6 header file fix
   - Bug 359 :: Additional parameter to GetConfigOptionByName() in HEAD
   - Remove unsupported warning for PG 9.5

Slony 2.2.5 can be downloaded from the following URL

http://www.slony.info/downloads/2.2/source/slony1-2.2.5.tar.bz2


Andreas Seltenreich | 2 Jun 15:55 2016

SQLsmith 1.0 is released

SQLsmith is a random SQL query generator for PostgreSQL.  It is inspired
by Csmith, which generates random C code.

Use cases are quality assurance through fuzz testing and benchmarking.
Besides PostgreSQL developers, users developing extensions might also be
interested in exposing their code to SQLsmith's random workload.

During its development, it has already found about thirty bugs in
PostgreSQL alphas, betas and releases, including security
vulnerabilities in released versions.  There is a score list maintained
by its users in a wiki:

  https://github.com/anse1/sqlsmith/wiki#score-list

Version 1.0 supports generating queries for PostgreSQL 9.5 or later
only.  SQLsmith was designed with testing different versions and even
products in mind, but this has not manifested yet for the first release.

SQLsmith is available under GPLv3 at

  https://github.com/anse1/sqlsmith/releases/latest

Packages for Debian/Ubuntu are available via apt.postgresql.org.

In case you need consulting for testing your PostgreSQL-based product
with SQLsmith, or simply want to speed up SQLsmith's development,
contracted work is available via my employer at

  https://www.credativ.de/
  mailto:info@credativ.de


Kouhei Sutou | 2 Jun 05:33 2016

[ANN] PGroonga 1.0.9 - Make PostgreSQL fast full text search platform for all languages

Hi,

PGroonga 1.0.9 has been released!

  http://groonga.org/en/blog/2016/06/02/pgroonga-1.0.9.html

### About PGroonga

http://pgroonga.github.io/

PGroonga is a PostgreSQL extension that makes PostgreSQL a
fast full text search platform for all languages!
It's released under the PostgreSQL license.

There are some PostgreSQL extensions that improve the full text
search feature of PostgreSQL, such as pg_trgm(*1).

(*1) http://www.postgresql.org/docs/current/static/pgtrgm.html

pg_trgm doesn't support languages that use non-alphanumeric
characters, such as Japanese and Chinese.

PGroonga supports all languages, provides rich full text
search related features, and is very fast, because PGroonga
uses Groonga(*2), a full-fledged full text search engine,
as its backend.

(*2) http://groonga.org/

PGroonga also supports JSON search. You can use each value
in a condition. You can also perform full text search against
all texts in a JSON document. Other extensions such as JsQuery(*3)
don't provide full text search against JSON.

(*3) https://github.com/postgrespro/jsquery

### Changes

Here are changes since 1.0.6:

  * Supported PostgreSQL 9.6 beta1.

  * Supported Ubuntu Xenial Xerus (16.04 LTS).

  * Added pgroonga.highlight_html function that returns
    HTML with search keywords highlighted.
    http://pgroonga.github.io/reference/functions/pgroonga-highlight-html.html

  * Added pgroonga.match_positions_byte function that
    returns locations of keywords in text.
    http://pgroonga.github.io/reference/functions/pgroonga-match-positions-byte.html

  * Added pgroonga.query_extract_keywords function that
    extracts keywords from a query.
    http://pgroonga.github.io/reference/functions/pgroonga-query-extract-keywords.html

  * Added &^> operator that performs prefix search against
    text[] type value. If any element is matched, the value
    is matched.
    http://pgroonga.github.io/reference/operators/prefix-search-contain-v2.html

  * Added &^~> operator that performs prefix RK search
    against text[] type value. If any element is matched,
    the value is matched.
    http://pgroonga.github.io/reference/operators/prefix-rk-search-contain-v2.html

### Usage

You can use PGroonga without full text search knowledge. You
just create an index and put a condition into WHERE:

  CREATE INDEX index_name ON table USING pgroonga (column);

  SELECT * FROM table WHERE column @@ 'PostgreSQL';

You can also use LIKE with PGroonga. PGroonga provides a
feature that performs LIKE with an index. LIKE with a PGroonga
index is faster than LIKE without an index. This means that you
can improve performance without changing an application
that uses the following SQL:

  SELECT * FROM table WHERE column LIKE '%PostgreSQL%';

Are you interested in PGroonga? Please install it(*4) and try
the tutorial(*5) to learn about all of PGroonga's features.

(*4) http://pgroonga.github.io/install/
(*5) http://pgroonga.github.io/tutorial/

You can install PGroonga easily, because PGroonga provides
packages for major platforms, including binaries for
Windows.

Thanks,
--
kou


Jan Wieck | 1 Jun 05:04 2016
Gravatar

BenchmarkSQL 5.0 available

BenchmarkSQL 5.0 is available for download.

BenchmarkSQL is an open source implementation of the popular TPC-C OLTP
database benchmark. Version 5.0 is a major overhaul of the benchmark driver.
This version supports Firebird, Oracle and PostgreSQL, adds foreign keys to
the schema (as required by the specification) and captures detailed benchmark
results in CSV files that can later be turned into an HTML report.


Regards, the BenchmarkSQL team

--
Jan Wieck
Senior Postgres Architect
