Josh Berkus | 26 May 22:08 2015

Advisory on May 22 Update Release

On May 22, 2015, the PostgreSQL Global Development Group released an
update to all supported versions of PostgreSQL, including versions
9.4.2, 9.3.7, 9.2.11, 9.1.16, and 9.0.20.  This update has a bug in
fsync logic which will cause some users' PostgreSQL servers to refuse to
restart after a crash, or after a restore from binary backup.  Please
read our FAQ on this issue
( before
applying the update to your systems.

The PostgreSQL Project regrets the error.  Another PostgreSQL update,
with this issue corrected, will be released soon.


Sent via pgsql-announce mailing list (pgsql-announce <at>
To make changes to your subscription:

CK Tan | 22 May 00:57 2015

Vitesse X 9.4.1 for Postgres released

Vitesse Data announces the release of Vitesse X 9.4.1 for Postgres. This is a Postgres extension that uses dynamic query compilation to speed up query execution.

Please go to for download instructions. 

Please direct all questions to cktan <at> .

Thank you.

David Fetter | 25 May 01:08 2015

== PostgreSQL Weekly News - May 24 2015 ==

PostgreSQL 9.4.2, 9.3.7, 9.2.11, 9.1.16, and 9.0.20 security and bug fix releases
are out.  Upgrade ASAP.

== PostgreSQL Product News ==

MJSQLView Version 7.02, a Java-based UI which supports PostgreSQL, released.

== PostgreSQL Jobs for May ==

== PostgreSQL Local ==

PGDay in Belfort, France will be held June 2, 2015.

PGCon 2015 is June 16-20 in Ottawa, Canada.

The second Swiss Postgres Conference will be held June 25-26, 2015 at
HSR Rapperswil.

The PGDay UK conference will take place on 7th July 2015; it is aimed
at the UK PostgreSQL Community.  The CfP is open until 13 April 2015.

PGDay Campinas 2015 will take place in Campinas on August 7.
The CfP is open through May 31.

The Call For Papers for PostgresOpen 2015, being held in Dallas, Texas
from September 16th to 18th, is now open.

The CfP for PostgreSQL Session #7, September 24th, 2015 in Paris,
France, is open until June 15, 2015.  call-for-paper <AT>
postgresql-sessions <DOT> org.

PostgreSQL Conference Europe 2015 will be held on October 27-30 in the
Vienna Marriott Hotel, in Vienna, Austria.  The CfP is open until
August 7.

PGConf Silicon Valley 2015 is November 17-18 at the South San
Francisco Convention Center.  The CfP is open through June 15.

== PostgreSQL in the News ==

Planet PostgreSQL:

PostgreSQL Weekly News is brought to you this week by David Fetter

Submit news and announcements by Sunday at 3:00pm Pacific time.
Please send English language ones to david <at>, German language
to pwn <at>, Italian language to pwn <at>, and Spanish language
to pwn <at>.

== Applied Patches ==

Tom Lane pushed:

- Use += not = to set makefile variables after including base
  makefiles.  The previous coding in hstore_plpython and
  ltree_plpython wiped out any values set by the base makefiles.  This
  at least had the effect of running the tests in "regression" not
  "contrib_regression" as expected.  These being pretty new modules,
  there might be other bad effects we'd not noticed yet.

- Fix failure to copy IndexScan.indexorderbyops in copyfuncs.c.  This
  oversight results in a crash at executor startup if the plan has
  been copied.  outfuncs.c was missed as well.  While we could
  probably have taught both those files to cope with the originally
  chosen representation of an Oid array, it would have been painful,
  not least because there'd be no easy way to verify the array length.
  An Oid List is far easier to work with.  And AFAICS, there is no
  particular notational benefit to using an array rather than a list
  in the existing parts of the patch either.  So just change it to a
  list.  Error in commit 35fcb1b3d038a501f3f4c87c05630095abaaadab,
  which is new, so no need for back-patch.

- Recognize "REGRESS_OPTS += ..." syntax in MSVC build scripts.
  Necessitated by commit b14cf229f4bd7238be2e31d873dc5dd241d3871e.
  Per buildfarm.

- Put back a backwards-compatible version of sampling support
  functions.  Commit 83e176ec18d2a91dbea1d0d1bd94c38dc47cd77c removed
  the longstanding support functions for block sampling without any
  consideration of the impact this would have on third-party FDWs.
  The new API is not notably more functional for FDWs than the old, so
  forcing them to change doesn't seem like a good thing.  We can
  provide the old API as a wrapper (more or less) around the new one
  for a minimal amount of extra code.

- Revert "Change pg_seclabel.provider and pg_shseclabel.provider to
  type "name"." This reverts commit
  b82a7be603f1811a0a707b53c62de6d5d9431740.  There is a better (less
  invasive) way to fix it, which I will commit next.

- Avoid collation dependence in indexes of system catalogs.  No index
  in template0 should have collation-dependent ordering, especially
  not indexes on shared catalogs.  For most textual columns we avoid
  this issue by using type "name" (which sorts per strcmp()).  However
  there are a few indexed columns that we'd prefer to use "text" for,
  and for that, the default opclass text_ops is unsafe.  Fortunately,
  text_pattern_ops is safe (it sorts per memcmp()), and it has no real
  functional disadvantage for our purposes.  So change the indexes on
  pg_seclabel.provider and pg_shseclabel.provider to use
  text_pattern_ops.  In passing, also mark
  pg_replication_origin.roname as using text_pattern_ops --- for some
  reason it was labeled varchar_pattern_ops which is just wrong, even
  though it accidentally worked.  Add regression test queries to catch
  future errors of these kinds.  We still can't do anything about the
  misdeclared pg_seclabel and pg_shseclabel indexes in back branches

- Change pg_seclabel.provider and pg_shseclabel.provider to type
  "name".  These were "text", but that's a bad idea because it has
  collation-dependent ordering.  No index in template0 should have
  collation-dependent ordering, especially not indexes on shared
  catalogs.  There was general agreement that provider names don't
  need to be longer than other identifiers, so we can fix this at a
  small waste of table space by changing from text to name.  There's
  no way to fix the problem in the back branches, but we can hope that
  security labels don't yet have widespread-enough usage to make it
  urgent to fix.  There needs to be a regression sanity test to
  prevent us from making this same mistake again; but before putting
  that in, we'll need to get rid of similar brain fade in the
  recently-added pg_replication_origin catalog.  Note: for lack of a
  suitable testing environment, I've not really exercised this change.
  I trust the buildfarm will show up any mistakes.

- Another typo fix.  In the spirit of the season.

- Improve packing/alignment annotation for ItemPointerData.  We want
  this struct to be exactly a series of 3 int16 words, no more and no
  less.  Historically, at least, some ARM compilers preferred to pad
  it to 8 bytes unless coerced.  Our old way of doing that was just to
  use __attribute__((packed)), but as pointed out by Piotr Stefaniak,
  that does too much: it also licenses the compiler to give the struct
  only byte-alignment.  We don't want that because it adds access
  overhead, possibly quite significant overhead.  According to the GCC
  manual, what we want requires also specifying
  __attribute__((aligned(2))).  It's not entirely clear if all the
  relevant compilers accept this pragma as well, but we can hope the
  buildfarm will tell us if not.  We can also add a static assertion
  that should fire if the compiler padded the struct.  Since the
  combination of these pragmas should define exactly what we want on
  any compiler that accepts them, let's try using them wherever we
  think they exist, not only for __arm__.  (This is likely to expose
  that the conditional definitions in c.h are inadequate, but finding
  that out would be a good thing.) The immediate motivation for this
  is that the current definition of ExecRowMark allows its curCtid
  field to be misaligned.  It is not clear whether there are any other
  uses of ItemPointerData with a similar hazard.  We could change the
  definition of ExecRowMark if this doesn't work, but it would be far
  better to have a future-proof fix.  Piotr Stefaniak, some further
  hacking by me

- More fixes for lossy-GiST-distance-functions patch.  Paul Ramsey
  reported that commit 35fcb1b3d038a501f3f4c87c05630095abaaadab
  induced a core dump on commuted ORDER BY expressions, because it was
  assuming that the indexorderby expression could be found verbatim in
  the relevant equivalence class, but it wasn't there.  We really
  don't need anything that complicated anyway; for the data types
  likely to be used for index ORDER BY operators in the foreseeable
  future, the exprType() of the ORDER BY expression will serve fine.
  (The case where we'd have to work harder is where the ORDER BY
  expression's result is only binary-compatible with the declared
  input type of the ordering operator; long before worrying about
  that, one would need to get rid of GiST's hard-wired assumption that
  said datatype is float8.) Aside from fixing that crash and adding a
  regression test for the case, I did some desultory code review:
  nodeIndexscan.c was likewise overthinking how hard it ought to work
  to identify the datatype of the ORDER BY expressions.  Add comments
  explaining how come nodeIndexscan.c can get away with simplifying
  assumptions about NULLS LAST ordering and no backward scan.  Revert
  no-longer-needed changes of find_ec_member_for_tle(); while the new
  definition was no worse than the old, it wasn't better either, and
  it might cause back-patching pain.  Revert entirely bogus additions
  to genam.h.

- Fix recently-introduced crash in array_contain_compare().  Silly
  oversight in commit 1dc5ebc9077ab742079ce5dac9a6664248d42916: when
  array2 is an expanded array, it might have array2->xpn.dnulls equal
  to NULL, indicating the array is known null-free.  The code wasn't
  expecting that, because it formerly always used deconstruct_array()
  which always delivers a nulls array.  Per bug #13334 from Regina

- Last-minute updates for release notes.  Add entries for security
  issues.  Security: CVE-2015-3165 through CVE-2015-3167

- Last-minute updates for release notes.  Revise description of
  CVE-2015-3166, in line with scaled-back patch.  Change release date.
  Security: CVE-2015-3166

- Revert error-throwing wrappers for the printf family of functions.
  This reverts commit 16304a013432931e61e623c8d85e9fe24709d9ba, except
  for its changes in src/port/snprintf.c; as well as commit
  cac18a76bb6b08f1ecc2a85e46c9d2ab82dd9d23 which is no longer needed.
  Fujii Masao reported that the previous commit caused failures in
  psql on OS X, since if one exits the pager program early while
  viewing a query result, psql sees an EPIPE error from fprintf ---
  and the wrapper function thought that was reason to panic.  (It's a
  bit surprising that the same does not happen on Linux.)  Further
  discussion among the security list concluded that the risk of other
  such failures was far too great, and that the one-size-fits-all
  approach to error handling embodied in the previous patch is
  unlikely to be workable.  This leaves us again exposed to the
  possibility of the type of failure envisioned in CVE-2015-3166.
  However, that failure mode is strictly hypothetical at this point:
  there is no concrete reason to believe that an attacker could
  trigger information disclosure through the supposed mechanism.  In
  the first place, the attack surface is fairly limited, since so much
  of what the backend does with format strings goes through
  stringinfo.c or psprintf(), and those already had adequate defenses.
  In the second place, even granting that an unprivileged attacker
  could control the occurrence of ENOMEM with some precision, it's a
  stretch to believe that he could induce it just where the target
  buffer contains some valuable information.  So we concluded that the
  risk of non-hypothetical problems induced by the patch greatly
  outweighs the security risks.  We will therefore revert, and instead
  undertake closer analysis to identify specific calls that may need
  hardening, rather than attempt a universal solution.  We have kept
  the portion of the previous patch that improved snprintf.c's
  handling of errors when it calls the platform's sprintf().  That
  seems to be an unalloyed improvement.  Security: CVE-2015-3166

- Still more fixes for lossy-GiST-distance-functions patch.  Fix
  confusion in documentation, substantial memory leakage if float8 or
  float4 are pass-by-reference, and assorted comments that were
  obsoleted by commit 98edd617f3b62a02cb2df9b418fcc4ece45c7ec0.

- Fix incorrect snprintf() limit.  Typo in commit 7cbee7c0a.  No
  practical effect since the buffer should never actually be overrun,
  but various compilers and static analyzers will whine about it.
  Petr Jelinek

- Add error check for lossy distance functions in index-only scans.
  Maybe we should actually support this, but for the moment let's just
  throw an error if the opclass tries it.

- Remove no-longer-required function declarations.  Remove a bunch of
  "extern Datum foo(PG_FUNCTION_ARGS);" declarations that are no
  longer needed now that PG_FUNCTION_INFO_V1(foo) provides that.  Some
  of these were evidently missed in commit e7128e8dbb305059, but
  others were cargo-culted in in code added since then.  Possibly that
  can be blamed in part on the fact that we'd not fixed relevant
  documentation examples, which I've now done.

- Add a bit more commentary about regex's colormap tree data
  structure.  Per an off-list question from Piotr Stefaniak.

- Rename pg_shdepend.c's typedef "objectType" to
  SharedDependencyObjectType.  The name objectType is widely used as a
  field name, and it's pure luck that this conflict has not caused
  pgindent to go crazy before.  It messed up pg_audit.c pretty good
  though.  Since pg_shdepend.c doesn't export this typedef and only
  uses it in three places, changing that seems saner than changing the
  field usages.  Back-patch because we're contemplating using the
  union of all branch typedefs for future pgindent runs, so this won't
  fix anything if it stays the same in back branches.

- Manual cleanup of pgindent results.  Fix some places where pgindent
  did silly stuff, often because project style wasn't followed to
  begin with.  (I've not touched the atomics headers, though.)

Peter Eisentraut pushed:

- Add new files to

- The plain C string language name needs to be wrapped in makeString()
  so that the parse tree is copyable.  This is detectable by
  -DCOPY_PARSE_PLAN_TREES.  Add a test case for the COMMENT case.
  Also make the quoting in the error messages more consistent.
  Discovered by Tom Lane.

- Message string improvements

Fujii Masao pushed:

- Don't classify REINDEX command as DDL in the pg_audit doc.  The
  commit a936743 changed the class of REINDEX but forgot to update the
  pg_audit documentation accordingly.
- Correct the names of pgstattuple_approx output columns in the doc.

- Make recovery_target_action = pause work.  Previously even if
  recovery_target_action was set to pause and the recovery target was
  reached, the recovery could never be paused.  Because the setting of
  pause was *always* overridden with that of shutdown unexpectedly.
  This override is valid and intentional if hot_standby is not enabled
  because there is no way to resume the paused recovery in this case
  and the setting of pause is completely useless. But not if
  hot_standby is enabled.  This patch changes the code so that the
  setting of pause is overridden with that of shutdown only when
  hot_standby is not enabled.  Bug reported by Andres Freund
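A minimal configuration sketch of the scenario this fixes (9.5-era recovery.conf parameters; the values are illustrative):

```
# recovery.conf -- point-in-time-recovery target, illustrative values
recovery_target_time   = '2015-05-01 12:00:00'
recovery_target_action = 'pause'   # with this fix, honored as written

# postgresql.conf
hot_standby = on   # required: without it, 'pause' is overridden with
                   # 'shutdown', since paused recovery could never be
                   # resumed
```
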

- Minor enhancement of readability of ALTER TABLE syntax in the doc.
  Fabrízio Mello

Heikki Linnakangas pushed:

- Put back stats-collector restarting code, removed accidentally.
  Removed that code snippet accidentally in the archive_mode='always'
  patch.  Also, use varname-tags for archive_command in the docs.
  Fujii Masao

- Fix typo in comment.  Jim Nasby

- Fix off-by-one error in Assertion.  The point of the assertion is to
  ensure that the arrays allocated in stack are large enough, but the
  check was one item short.  This won't matter in practice because
  MaxIndexTuplesPerPage is an overestimate, so you can't have that
  many items on a page in reality.  But let's be tidy.  Spotted by
  Anastasia Lubennikova. Backpatch to all supported versions, like the
  patch that added the assertion.

- Collection of typo fixes.  Use "a" and "an" correctly, mostly in
  comments. Two error messages were also fixed (they were just elogs,
  so no translation work required). Two function comments in pg_proc.h
  were also fixed. Etsuro Fujita reported one of these, but I found a
  lot more with grep.  Also fix a few other typos spotted while
  grepping for the a/an typos.  For example, "consists out of ..." ->
  "consists of ...". Plus a "though"/ "through" mixup reported by
  Euler Taveira.  Many of these typos were in old code, which would be
  nice to backpatch to make future backpatching easier. But much of
  the code was new, and I didn't feel like crafting separate patches
  for each branch. So no backpatching.

- Fix more typos in comments.  Patch by CharSyam, plus a few more I
  spotted with grep.

- At promotion, don't leave behind a partial segment on the old
  timeline.  With commit de768844, a copy of the partial segment was
  archived with the .partial suffix, but the original file was still
  left in pg_xlog, so it didn't actually solve the problems with
  archiving the partial segment that it was supposed to solve. With
  this patch, the partial segment is renamed rather than copied, so we
  only archive it with the .partial suffix.  Also be more robust in
  detecting if the last segment is already being archived. Previously
  I used XLogArchiveIsBusy() for that, but that's not quite right.
  With archive_mode='always', there might be a .ready file for it, and
  we don't want to rename it to .partial in that case.  The old
  segment is needed until we're fully committed to the new timeline,
  i.e. until we've written the end-of-recovery WAL record and updated
  the min recovery point and timeline in the control file. So move the
  renaming later in the startup sequence, after all that's been done.

Noah Misch pushed:

- Permit use of vsprintf() in PostgreSQL code.  The next commit needs
  it.  Back-patch to 9.0 (all supported versions).

- Check return values of sensitive system library calls.  PostgreSQL
  already checked the vast majority of these, missing this handful
  that nearly cannot fail.  If putenv() failed with ENOMEM in
  pg_GSS_recvauth(), authentication would proceed with the wrong
  keytab file.  If strftime() returned zero in cache_locale_time(),
  using the unspecified buffer contents could lead to information
  exposure or a crash.  Back-patch to 9.0 (all supported versions).
  Other unchecked calls to these functions, especially those in
  frontend code, pose negligible security concern.  This patch does
  not address them.  Nonetheless, it is always better to check return
  values whose specification provides for indicating an error.  In
  passing, fix an off-by-one error in strftime_win32()'s invocation of
  WideCharToMultiByte().  Upon retrieving a value of exactly
  MAX_L10N_DATA bytes, strftime_win32() would overrun the caller's
  buffer by one byte.  MAX_L10N_DATA is chosen to exceed the length of
  every possible value, so the vulnerable scenario probably does not
  arise.  Security: CVE-2015-3166

- Prevent a double free by not reentering be_tls_close().  Reentering
  this function with the right timing caused a double free, typically
  crashing the backend.  By synchronizing a disconnection with the
  authentication timeout, an unauthenticated attacker could achieve
  this somewhat consistently.  Call be_tls_close() solely from within
  proc_exit_prepare().  Back-patch to 9.0 (all supported versions).
  Benkocs Norbert Attila Security: CVE-2015-3165

- pgcrypto: Report errant decryption as "Wrong key or corrupt data".
  This has been the predominant outcome.  When the output of
  decrypting with a wrong key coincidentally resembled an OpenPGP
  packet header, pgcrypto could instead report "Corrupt data", "Not
  text data" or "Unsupported compression algorithm".  The distinct
  "Corrupt data" message added no value.  The latter two error
  messages misled when the decrypted payload also exhibited
  fundamental integrity problems.  Worse, error message variance in
  other systems has enabled cryptologic attacks; see RFC 4880 section
  "14. Security Considerations".  Whether these pgcrypto behaviors are
  likewise exploitable is unknown.  In passing, document that pgcrypto
  does not resist side-channel attacks.  Back-patch to 9.0 (all
  supported versions).  Security: CVE-2015-3167

- Add error-throwing wrappers for the printf family of functions.  All
  known standard library implementations of these functions can fail
  with ENOMEM.  A caller neglecting to check for failure would
  experience missing output, information exposure, or a crash.  Check
  return values within wrappers and code, currently just snprintf.c,
  that bypasses the wrappers.  The wrappers do not return after an
  error, so their callers need not check.  Back-patch to 9.0 (all
  supported versions).  Popular free software standard library
  implementations do take pains to bypass malloc() in simple cases,
  but they risk ENOMEM for floating point numbers, positional
  arguments, large field widths, and large precisions.  No
  specification demands such caution, so this commit regards every
  call to a printf family function as a potential threat.  Injecting
  the wrappers implicitly is a compromise between patch scope and
  design goals.  I would prefer to edit each call site to name a
  wrapper explicitly.  libpq and the ECPG libraries would, ideally,
  convey errors to the caller rather than abort().  All that would be
  painfully invasive for a back-patched security fix, hence this
  compromise.  Security: CVE-2015-3166

Robert Haas pushed:

- Fix error message in pre_sync_fname.  The old one didn't include %m
  anywhere, and required extra translation.  Report by Peter
  Eisentraut. Fix by me. Review by Tom Lane.

- Correct two mistakes in the ALTER FOREIGN TABLE reference page.
  Etsuro Fujita

Andres Freund pushed:

- Attach ON CONFLICT SET ... WHERE to the correct planstate.  The
  previous coding was a leftover from attempting to hang all the on
  conflict logic onto modify table's child nodes. It appears to not
  have actually caused problems except for explain.  Add test
  exercising the broken and some other code paths.  Author: Peter
  Geoghegan and Andres Freund

- Various fixes around ON CONFLICT for rule deparsing.  Neither the
  deparsing of the new alias for INSERT's target table, nor of the
  inference clause was supported. Also fixup a typo in an error
  message.  Add regression tests to test those code paths.  Author:
  Peter Geoghegan

- Refactor ON CONFLICT index inference parse tree representation.
  Defer lookup of opfamily and input type of a user-specified
  opclass until the optimizer selects among available unique indexes;
  and store the opclass in the parse analyzed tree instead.  The
  primary reason for doing this is that for rule deparsing it's easier
  to use the opclass than the previous representation.  While at it
  also rename a variable in the inference code to better fit its
  purpose.  This is separate from the actual fixes for deparsing to
  make review easier.

- Remove the new UPSERT command tag and use INSERT instead.
  Previously, INSERT with ON CONFLICT DO UPDATE specified used a new
  command tag -- UPSERT.  It was introduced out of concern that INSERT
  as a command tag would be a misrepresentation for ON CONFLICT DO
  UPDATE, as some affected rows may actually have been updated.
  Alvaro Herrera noticed that the implementation of that new command
  tag was incomplete; in subsequent discussion we concluded that
  having it doesn't provide benefits that are in line with the
  compatibility breaks it requires.  Catversion bump due to the
  removal of PlannedStmt->isUpsert.  Author: Peter Geoghegan
  Discussion: 20150520215816.GI5885 <at>

- Fix yet another bug in ON CONFLICT rule deparsing.  Expand testing
  of rule deparsing a good bit, it's evidently needed.  Author: Peter
  Geoghegan, Andres Freund Discussion:

Simon Riggs pushed:

- Fix spelling in comment

Andrew Dunstan pushed:

- Unpack jbvBinary objects passed to pushJsonbValue.  pushJsonbValue was
  accepting jbvBinary objects passed as WJB_ELEM or WJB_VALUE data.
  While this succeeded, when those objects were later encountered in
  attempting to convert the result to Jsonb, errors occurred. With
  this change we guarantee that a JsonbValue constructed from calls
  to pushJsonbValue does not contain any jbvBinary objects.  This
  cures a problem observed with jsonb_delete.  This means callers of
  pushJsonbValue no longer need to perform this unpacking themselves.
  A subsequent patch will perform some cleanup in that area.  The
  error was not triggered by any 9.4 code, but this is a publicly
  visible routine, and so the error could be exercised by third party
  code, therefore backpatch to 9.4.  Bug report from Peter Geoghegan,
  fix by me.

Bruce Momjian pushed:

- Improve pgindent instructions regarding Perl backup files

- Update typedef file in preparation for pgindent run

- pgindent run for 9.5

== Rejected Patches (for now) ==

No one was disappointed this week :-)

== Pending Patches ==

Álvaro Herrera sent in another revision of a patch to fix some
infelicities with MultiXact in recovery.

Uriy Zhuravlev sent in a patch to implement ALTER OPERATOR.

John Gorman sent in a patch to preserve errno across errmsg() calls.

Joshua Drake sent in a doc patch for backups.

Jim Nasby sent in another revision of a patch to add an optional
parameter to the pg_*_backend functions which allows skipping the
backend making the call.

Jeff Janes sent in a patch to fix max WAL size in recovery.

Alexander Shulgin sent in a patch to generalize the JSON-producing
functions in utils/adt/json.c and to provide a set of callbacks which
can be overridden in the same way that is already provided for
parsing.
Andrew Gierth sent in a patch to change the type of GROUPING() from
int4 to int8.

Robert Haas sent in another revision of a patch to allow assessing
parallel safety.

Alexander Shulgin sent in another revision of a patch to add a
--strict-include parameter to pg_dump.

Andrew Dunstan sent in a patch to rename jsonb_replace to jsonb_set
with a boolean create_missing flag that defaults to false.

Amit Kapila sent in another revision of a patch to implement parallel
seq scan.

Pavel Stehule sent in a patch to allow SET ROLE TO to tab-complete in
psql, just as SET ROLE now does.

Fabrízio de Royes Mello sent in a patch to fix some spacing in the
ALTER TABLE documentation.

CharSyam sent in a patch to change magic constants to DEFINE value for

Noah Misch sent in a patch to fix some tar name issues.

Fabien COELHO sent in another revision of a patch to extend pgbench
expressions with functions.

Petr Korobeinikov sent in another revision of a patch to add support
for \ev viewname and \sv viewname to psql.



Dave Page | 22 May 15:14 2015

PostgreSQL 9.4.2, 9.3.7, 9.2.11, 9.1.16, and 9.0.20 released

The PostgreSQL Global Development Group has released an update with
multiple functionality and security fixes to all supported versions of
the PostgreSQL database system, which includes minor versions 9.4.2,
9.3.7, 9.2.11, 9.1.16, and 9.0.20. The update contains a critical fix
for a potential data corruption issue in PostgreSQL 9.3 and 9.4; users
of those versions should update their servers at the next possible
downtime.

Data Corruption Fix

For users of PostgreSQL versions 9.3 or 9.4, this release fixes a
problem where the database will fail to protect against "multixact
wraparound", resulting in data corruption or loss. Users with a high
transaction rate (1 million or more per hour) in a database with many
foreign keys are especially vulnerable. We strongly urge all users of
9.4 and 9.3 to update their installations in the next few days.

Users of versions 9.2 and earlier are not affected by this issue.

Security Fixes

This update fixes three security vulnerabilities reported in
PostgreSQL over the past few months.  None of these issues is seen
as particularly urgent. However, users should examine them in case
their installations are vulnerable:

* CVE-2015-3165 (
  Double "free" after authentication timeout.
* CVE-2015-3166 (
  Unanticipated errors from the standard library.
* CVE-2015-3167 (
  pgcrypto has multiple error messages for decryption with an incorrect key.

Additionally, we are recommending that all users who use Kerberos,
GSSAPI, or SSPI authentication set include_realm to 1 in pg_hba.conf,
which will become the default in future versions.

More information about these issues, as well as older patched issues,
is available on the PostgreSQL Security Page.

Other Fixes and Improvements

A new, non-default version of the citext extension fixes its
previously undocumented regexp_matches() functions to align with the
ordinary text version of those functions. The fixed version has a
different return type than the old version, so users of CIText should
test their applications before updating the function by running
"ALTER EXTENSION citext UPDATE;".

In addition to the above, more than 50 reported issues have been fixed
in this cumulative update release.  Most of the issues named affect
all supported versions.  These fixes include:

* Render infinite dates and timestamps as infinity when converting to json
* Fix json/jsonb's populate_record() and to_record()
* Fix incorrect checking of deferred exclusion constraints
* Improve planning of star-schema-style queries
* Fix three issues with joins
* Ensure correct locking with security barrier views
* Fix deadlock at startup when max_prepared_transactions is too small
* Recursively fsync() the data directory after a crash
* Fix autovacuum launcher's possible failure to shut down
* Cope with unexpected signals in LockBufferForCleanup()
* Fix crash when doing COPY IN to a table with check constraints
* Avoid waiting for synchronous replication of read-only transactions
* Fix two issues with hash indexes
* Prevent memory leaks in GIN index vacuum
* Fix two issues with background workers
* Several fixes to Logical Decoding replication
* Fix several minor issues with pg_dump and pg_upgrade

This release includes an update to tzdata release 2015d, with updates
to Egypt, Mongolia, and Palestine, plus historical changes in Canada
and Chile.

9.0 EOL Soon

Version 9.0 will become End-Of-Life in September 2015.  This means
that this update is likely to be the next-to-last update for that
version.  Users of PostgreSQL 9.0 should start planning to upgrade to
a more current version before then.  See our versioning policy
( for more information
about EOL dates.

As with other minor releases, users are not required to dump and
reload their database or use pg_upgrade in order to apply this update
release; you may simply shut down PostgreSQL and update its binaries.
Users of the CIText extension need to run a command. Users who have
skipped multiple update releases may need to perform additional
post-update steps; see the Release Notes for details.

  * Download (
  * Release Notes (
  * Security Page (

Dave Page
PostgreSQL Core Team



David Fetter | 18 May 01:36 2015

== PostgreSQL Weekly News - May 17 2015 ==

There is an Italian language channel for PostgreSQL content.

The CLEI Latin American Conference on Informatics will be held in
Arequipa, Peru, from October 19th to 23rd, 2015.

Informatica 2016, which will be held in Cuba, is looking for
PostgreSQL content for its open source workshop.

PostgreSQL Conference Europe 2015 will be held on October 27-30 in the
Vienna Marriott Hotel, in Vienna, Austria.  The CfP is open until
August 7.

SwissPUG is now open to new members.

== PostgreSQL Product News ==

pgBadger 7.0, a parallel PostgreSQL log analyzer written in Perl, released:

Postgres Toolkit 0.2 released.

== PostgreSQL Jobs for May ==

== PostgreSQL Local ==

The second Swiss Postgres Conference will be held June 25-26, 2015 at
HSR Rapperswil.

There is a Postgres track at a database technology conference (DTCC)
on April 18, 2015 in Beijing, China.

pgDay Paris will be held in Paris, France on April 21, 2015.

PGDay in Belfort, France will be held June 2, 2015.

PGCon 2015 is June 16-20 in Ottawa, Canada.

The PGDay UK conference will take place on 7th July 2015; it is aimed
at the UK PostgreSQL Community.  The CfP is open until 13 April 2015.

PGDay Campinas 2015 will take place in Campinas on August 7.
The CfP is open through May 31.

The Call For Papers for PostgresOpen 2015, being held in Dallas, Texas
from September 16th to 18th, is now open.

The CfP for PostgreSQL Session #7, September 24th, 2015 in Paris,
France, is open until June 15, 2015.  Send proposals to
call-for-paper <AT> postgresql-sessions <DOT> org.

PGConf Silicon Valley 2015 is November 17-18 at the South San
Francisco Convention Center.  The CfP is open through June 15.

== PostgreSQL in the News ==

Planet PostgreSQL:

PostgreSQL Weekly News is brought to you this week by David Fetter.

Submit news and announcements by Sunday at 3:00pm Pacific time.
Please send English language ones to david <at>, German language
to pwn <at>, Italian language to pwn <at>, and Spanish language
to pwn <at>

== Applied Patches ==

Robert Haas pushed:

- Fix DetermineSafeOldestOffset for the case where there are no
  mxacts.  Commit b69bf30b9bfacafc733a9ba77c9587cf54d06c0c failed to
  take into account the possibility that there might be no multixacts
  in existence at all.  Report by Thomas Munro; patch by me.

- Advance the stop point for multixact offset creation only at
  checkpoint.  Commit b69bf30b9bfacafc733a9ba77c9587cf54d06c0c
  advanced the stop point at vacuum time, but this has subsequently
  been shown to be unsafe as a result of analysis by myself and Thomas
  Munro and testing by Thomas Munro.  The crux of the problem is that
  the SLRU deletion logic may get confused about what to remove if, at
  exactly the right time during the checkpoint process, the head of
  the SLRU crosses what used to be the tail.  This patch, by me, fixes
  the problem by advancing the stop point only following a checkpoint.
  This has the additional advantage of making the removal logic work
  during recovery more like the way it works during normal running,
  which is probably good.  At least one of the calls to
  DetermineSafeOldestOffset which this patch removes was already dead,
  because MultiXactAdvanceOldest is called only during recovery and
  DetermineSafeOldestOffset was set up to do nothing during recovery.
  That, however, is inconsistent with the principle that recovery and
  normal running should work similarly, and was confusing to boot.
  Along the way, fix some comments that previous patches in this area
  neglected to update.  It's not clear to me whether there's any
  concrete basis for the decision to use only half of the multixact ID
  space, but it's neither necessary nor sufficient to prevent
  multixact member wraparound, so the comments should not say that it
  is.

- Even when autovacuum=off, force it for members as we do in other
  cases.  Thomas Munro, with some adjustments by me.

- Increase threshold for multixact member emergency autovac to 50%.
  Analysis by Noah Misch shows that the 25% threshold set by commit
  53bb309d2d5a9432d2602c93ed18e58bd2924e15 is lower than any other,
  similar autovac threshold.  While we don't know exactly what value
  will be optimal for all users, it is better to err a little on the
  high side than on the low side.  A higher value increases the risk
  that users might exhaust the available space and start seeing errors
  before autovacuum can clean things up sufficiently, but a user who
  hits that problem can compensate for it by reducing
  autovacuum_multixact_freeze_max_age to a value dependent on their
  average multixact size.  On the flip side, if the emergency cap
  imposed by that patch kicks in too early, the user will experience
  excessive wraparound scanning and will be unable to mitigate that
  problem by configuration.  The new value will hopefully reduce the
  risk of such bad experiences while still providing enough headroom
  to avoid multixact member exhaustion for most users.  Along the way,
  adjust the documentation to reflect the effects of commit
  04e6d3b877e060d8445eb653b7ea26b1ee5cec6b, which taught autovacuum to
  run for multixact wraparound even when autovacuum is configured off.

- Remove useless assertion.  Here, snapshot->xcnt is an unsigned type,
  so it will always be non-negative.

- Extend abbreviated key infrastructure to datum tuplesorts.  Andrew
  Gierth, reviewed by Peter Geoghegan and by me.

- Fix comment.  Commit 78efd5c1edb59017f06ef96773e64e6539bfbc86
  overlooked this.  Report by Peter Geoghegan.

- doc: CREATE FOREIGN TABLE now allows CHECK ( ... ) NO INHERIT.
  Etsuro Fujita

Bruce Momjian pushed:

- pg_dump:  suppress "Tablespace:" comment for default tablespaces.
  Report by Hans Ginzel

- docs:  add "serialization anomaly" to transaction isolation table.
  Also distinguish between SQL-standard and Postgres behavior.  Report
  by David G. Johnston

- initdb:  only recommend pg_ctl to start the server.  Previously we
  mentioned the 'postgres' binary method as well.

- pg_upgrade:  use single or double-quotes in command-line strings.
  This is platform-dependent.

- doc:  prevent SGML 'make check' from building temp install.  Report
  by Alvaro Herrera

- doc build:  use unique Makefile variable to control temp install

- doc:  list bigint as mapping to int8 and int64.  Report by Paul

- pg_upgrade:  make controldata checks more consistent.  Also add
  missing float8_pass_by_value check.

- docs:  consistently uppercase index method and add spacing.
  Consistently uppercase index method names, e.g. GIN, and add space
  after the index method name and the parentheses enclosing the column
  names.

- pg_upgrade:  only allow template0 to be non-connectable.  This patch
  causes pg_upgrade to error out during its check phase if (1)
  template0 is marked connectable, or (2) any other database is
  marked non-connectable.  This is done because, in the first case,
  pg_upgrade would fail because the pg_dumpall --globals restore would
  fail, and in the second case, the database would not be restored,
  leading to data loss.  Report by Matt Landry (1), Stephen Frost (2).
  Backpatch through 9.0

- pg_upgrade:  force timeline 1 in the new cluster.  Previously, this
  prevented promoted standby servers from being upgraded because of a
  missing WAL history file.  (Timeline 1 doesn't need a history file,
  and we don't copy WAL files anyway.) Report by Christian Echerer(?),
  Alexey Klyukin.  Backpatch through 9.0

- pg_upgrade:  no need to check for matching float8_pass_by_value.
  Report by Noah Misch

Tom Lane pushed:

- Fix incorrect checking of deferred exclusion constraint after a HOT
  update.  If a row that potentially violates a deferred exclusion
  constraint is HOT-updated later in the same transaction, the
  exclusion constraint would be reported as violated when the check
  finally occurs, even if the row(s) the new row originally conflicted
  with have since been removed.  This happened because the wrong TID
  was passed to check_exclusion_constraint(), causing the live
  HOT-updated row to be seen as a conflicting row rather than
  recognized as the row-under-test.  Per bug #13148 from Evan Martin.
  It's been broken since exclusion constraints were invented, so
  back-patch to all supported branches.

- Add support for doing late row locking in FDWs.  Previously, FDWs
  could only do "early row locking", that is lock a row as soon as
  it's fetched, even though local restriction/join conditions might
  discard the row later.  This patch adds callbacks that allow FDWs to
  do late locking in the same way that it's done for regular tables.
  To make use of this feature, an FDW must support the "ctid" column
  as a unique row identifier.  Currently, since ctid has to be of type
  TID, the feature is of limited use, though in principle it could be
  used by postgres_fdw.  We may eventually allow FDWs to specify
  another data type for ctid, which would make it possible for more
  FDWs to use this feature.  This commit does not modify postgres_fdw
  to use late locking.  We've tested some prototype code for that, but
  it's not in committable shape, and besides it's quite unclear
  whether it actually makes sense to do late locking against a remote
  server.  The extra round trips required are likely to outweigh any
  benefit from improved concurrency.  Etsuro Fujita, reviewed by
  Ashutosh Bapat, and hacked up a lot by me

- Fix postgres_fdw to return the right ctid value in EvalPlanQual
  cases.  If a postgres_fdw foreign table is a non-locked source
  relation in an UPDATE, DELETE, or SELECT FOR UPDATE/SHARE, and the
  query selects its ctid column, the wrong value would be returned if
  an EvalPlanQual recheck occurred.  This happened because the foreign
  table's result row was copied via the ROW_MARK_COPY code path, and
  EvalPlanQualFetchRowMarks just unconditionally set the reconstructed
  tuple's t_self to "invalid".  To fix that, we can have
  EvalPlanQualFetchRowMarks copy the composite datum's t_ctid field,
  and be sure to initialize that along with t_self when postgres_fdw
  constructs a tuple to return.  If we just did that much then
  EvalPlanQualFetchRowMarks would start returning "(0,0)" as ctid for
  all other ROW_MARK_COPY cases, which perhaps does not matter much,
  but then again maybe it might.  The cause of that is that
  heap_form_tuple, which is the ultimate source of all composite
  datums, simply leaves t_ctid as zeroes in newly constructed tuples.
  That seems like a bad idea on general principles: a field that's
  really not been initialized shouldn't appear to have a valid value.
  So let's eat the trivial additional overhead of doing
  "ItemPointerSetInvalid(&(td->t_ctid))" in heap_form_tuple.  This
  closes out our handling of Etsuro Fujita's report that tableoid and
  ctid weren't correctly set in postgres_fdw EvalPlanQual cases.
  Along the way we did a great deal of work to improve FDWs' ability
  to control row locking behavior; which was not wasted effort by any
  means, but it didn't end up being a fix for this problem because
  that feature would be too expensive for postgres_fdw to use all the
  time.  Although the fix for the tableoid misbehavior was
  back-patched, I'm hesitant to do so here; it seems far less likely
  that people would care about remote ctid than tableoid, and even
  such a minor behavioral change as this in heap_form_tuple is perhaps
  best not back-patched.  So commit to HEAD only, at least for the
  moment.  Etsuro Fujita, with some adjustments by me

- Fix distclean/maintainer-clean targets to remove top-level
  tmp_install dir.  The top-level makefile removes tmp_install in its
  "clean" target, but the distclean and maintainer-clean targets
  overlooked that (and they don't simply invoke clean, because that
  would result in an extra tree traversal).  While at it, let's just
  make sure that removing GNUmakefile itself is the very last step of
  the recipe.

- Support "expanded" objects, particularly arrays, for better
  performance.  This patch introduces the ability for complex
  datatypes to have an in-memory representation that is different from
  their on-disk format.  On-disk formats are typically optimized for
  minimal size, and in any case they can't contain pointers, so they
  are often not well-suited for computation.  Now a datatype can
  invent an "expanded" in-memory format that is better suited for its
  operations, and then pass that around among the C functions that
  operate on the datatype.  There are also provisions (rudimentary as
  yet) to allow an expanded object to be modified in-place under
  suitable conditions, so that operations like assignment to an
  element of an array need not involve copying the entire array.  The
  initial application for this feature is arrays, but it is not hard
  to foresee using it for other container types like JSON, XML and
  hstore.  I have hopes that it will be useful to PostGIS as well.  In
  this initial implementation, a few heuristics have been hard-wired
  into plpgsql to improve performance for arrays that are stored in
  plpgsql variables.  We would like to generalize those hacks so that
  other datatypes can obtain similar improvements, but figuring out
  some appropriate APIs is left as a task for future work.  (The
  heuristics themselves are probably not optimal yet, either, as they
  sometimes force expansion of arrays that would be better left
  alone.) Preliminary performance testing shows impressive speed gains
  for plpgsql functions that do element-by-element access or update of
  large arrays.  There are other cases that get a little slower, as a
  result of added array format conversions; but we can hope to improve
  anything that's annoyingly bad.  In any case most applications
  should see a net win.  Tom Lane, reviewed by Andres Freund
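The flat-versus-expanded tradeoff is easy to picture outside the server.
Below is a toy Python sketch (struct-packed bytes standing in for the
compact on-disk format, a list for the mutable "expanded" form; none of
this is PostgreSQL code): updating one element of the flat form requires
rebuilding the whole value, while the expanded form supports cheap
in-place assignment.

```python
import struct

# On-disk style: compact, contiguous, no pointers -- good for storage.
flat = struct.pack("3i", 1, 2, 3)

def flat_set(buf, idx, val):
    """Assign one element of a flat array: the entire value must be
    deserialized and re-serialized, i.e. copied, every time."""
    xs = list(struct.unpack(f"{len(buf) // 4}i", buf))
    xs[idx] = val
    return struct.pack(f"{len(xs)}i", *xs)

# "Expanded" in-memory style: mutable and pointer-based -- good for
# computation, e.g. element-by-element update in a plpgsql loop.
expanded = list(struct.unpack("3i", flat))
expanded[1] = 42              # O(1) in-place assignment, no copy

flat2 = flat_set(flat, 1, 42)  # same result, but a full-array copy
```

The commit's heuristics decide when a plpgsql array variable is worth
keeping in the expanded form rather than repeatedly re-flattening it.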

- Suppress uninitialized-variable warning.

- Docs: fix erroneous claim about max byte length of GB18030.  This
  encoding has characters up to 4 bytes long, not 2.

- Fix portability issue in pg_audit.  "%ld" is not a portable way to
  print int64's.  This may explain the buildfarm crashes we're seeing
  --- it seems to make dromedary happy, at least.

- Teach UtfToLocal/LocalToUtf to support algorithmic encoding
  conversions.  Until now, these functions have only supported
  encoding conversions using lookup tables, which is fine as long as
  there's not too many code points to convert.  However, GB18030
  expects all 1.1 million Unicode code points to be convertible, which
  would require a ridiculously-sized lookup table.  Fortunately, a
  large fraction of those conversions can be expressed through
  arithmetic, ie the conversions are one-to-one in certain defined
  ranges.  To support that, provide a callback function that is used
  after consulting the lookup tables.  (This patch doesn't actually
  change anything about the GB18030 conversion behavior, just provide
  infrastructure for fixing it.) Since this requires changing the APIs
  of UtfToLocal/LocalToUtf anyway, take the opportunity to rearrange
  their argument lists into what seems to me a saner order.  And
  beautify the call sites by using lengthof() instead of error-prone
  sizeof() arithmetic.  In passing, also mark all the lookup tables
  used by these calls "const".  This moves an impressive amount of
  stuff into the text segment, at least on my machine, and is safer

- Honor traditional SGML NAMELEN limit.  We've conformed to this limit
  in the past, so might as well continue to.  Aaron Swenson

- Fix insufficiently-paranoid GB18030 encoding verifier.  The previous
  coding effectively only verified that the second byte of a multibyte
  character was in the expected range; moreover, it wasn't careful to
  make sure that the second byte even exists in the buffer before
  touching it.  The latter seems unlikely to cause any real problems
  in the field (in particular, it could never be a problem with
  null-terminated input), but it's still a bug.  Since GB18030 is not
  a supported backend encoding, the only thing we'd really be doing
  with GB18030 text is converting it to UTF8 in LocalToUtf, which
  would fail anyway on any invalid character for lack of a match in
  its lookup table.  So the only user-visible consequence of this
  change should be that you'll get "invalid byte sequence for
  encoding" rather than "character has no equivalent" for malformed
  GB18030 input.  However, impending changes to the GB18030 conversion
  code will require these tighter up-front checks to avoid producing
  bogus results.

- Fix outdated src/test/mb/ tests, and add a GB18030 test.  The
  expected-output files for these tests were broken by the recent
  addition of a warning for hash indexes.  Update them.  Also add a
  test case for GB18030 encoding, similar to the other ones.  This is
  a pretty weak test, but it's better than nothing.

- Extend GB18030 encoding conversion to cover full Unicode range.  Our
  previous code for GB18030 <-> UTF8 conversion only covered Unicode
  code points up to U+FFFF, but the actual spec defines conversions
  for all code points up to U+10FFFF.  That would be rather
  impractical as a lookup table, but fortunately there is a simple
  algorithmic conversion between the additional code points and the
  equivalent GB18030 byte patterns.  Make use of the just-added
  callback facility in LocalToUtf/UtfToLocal to perform the additional
  conversions.  Having created the infrastructure to do that, we can
  use the same code to map certain linearly-related subranges of the
  Unicode space below U+FFFF, allowing removal of the corresponding
  lookup table entries.  This more than halves the lookup table size,
  which is a substantial savings; the table drops from nearly a
  megabyte to about half that.  In support of doing that,
  replace ISO10646-GB18030.TXT with the data file gb-18030-2000.xml
  (retrieved from
  ) in which these subranges have been deleted from the simple lookup
  entries.  Per bug #12845 from Arjen Nienhuis.  The conversion code
  added here is based on his proposed patch, though I whacked it
  around rather heavily.
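The algorithmic piece is compact enough to sketch.  Here is an
illustrative Python reconstruction of the four-byte linear mapping for
supplementary-plane code points (my own rendering of the GB18030
arithmetic, not the committed C code), cross-checked against Python's
built-in gb18030 codec:

```python
def gb18030_4byte(cp):
    """Encode a code point in U+10000..U+10FFFF as a GB18030 four-byte
    sequence using the linear mapping: byte positions count in a mixed
    radix of 10, 126, 10, starting from lead byte 0x90."""
    assert 0x10000 <= cp <= 0x10FFFF
    linear = cp - 0x10000            # offset from the start of the range
    b4 = 0x30 + linear % 10;  linear //= 10
    b3 = 0x81 + linear % 126; linear //= 126
    b2 = 0x30 + linear % 10;  linear //= 10
    b1 = 0x90 + linear               # U+10000 maps to 0x90 0x30 0x81 0x30
    return bytes([b1, b2, b3, b4])

# Sanity-check the arithmetic against Python's own GB18030 converter.
for cp in (0x10000, 0x10302, 0x10FFFF):
    assert gb18030_4byte(cp) == chr(cp).encode("gb18030")
```

The same kind of range arithmetic is what lets the commit replace large
blocks of simple lookup entries below U+FFFF with callback code.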

- Fix uninitialized variable.  Per compiler warnings.

- Improve test for CONVERT() with GB18030 <-> UTF8.  Add a bit of
  coverage of high code points.  Arjen Nienhuis

- Update time zone data files to tzdata release 2015d.  DST law
  changes in Egypt, Mongolia, Palestine.  Historical corrections for
  Canada and Chile.  Revised zone abbreviation for America/Adak

- Avoid direct use of INFINITY.  It's not very portable.  Per
  buildfarm.

- More portability fixing for bipartite_match.c.  <float.h> is
  required for isinf() on some platforms.  Per buildfarm.

- Fix docs typo.  I don't think "respectfully" is what was meant here...

- First-draft release notes for 9.4.2 et al.  As usual, the release
  notes for older branches will be made by cutting these down, but put
  them up for community review first.

- Release notes for 9.4.2, 9.3.7, 9.2.11, 9.1.16, 9.0.20.

Stephen Frost pushed:

- Allow LOCK TABLE .. ROW EXCLUSIVE MODE with INSERT.  INSERT acquires
  RowExclusiveLock during normal operation and therefore it makes
  sense to allow LOCK TABLE .. ROW EXCLUSIVE MODE to be executed by
  users who have INSERT rights on a table (even if they don't have
  UPDATE or DELETE).  Not back-patching this as it's a behavior change
  which, strictly speaking, loosens security restrictions.  Per
  discussion with Tom and Robert (circa 2013).

- pgbench: Don't fail during startup.  In pgbench, report, but ignore,
  any errors returned when attempting to vacuum/truncate the default
  tables during startup.  If the tables are needed, we'll error out
  soon enough anyway.  Per discussion with Tatsuo, David Rowley, Jim
  Nasby, Robert, Andres, Fujii, Fabrízio de Royes Mello, Tomas Vondra,
  Michael Paquier, Peter, based on a suggestion from Jeff Janes, patch
  from Robert, additional message wording from Tom.

- Fix buildfarm with regard to pg_audit.  Remove the check that
  pg_audit be installed by shared_preload_libraries as that's not
  going to work when running the regressions tests in the buildfarm.
  That check was primarily a nice-to-have and isn't required anyway.

- Further fixes for the buildfarm for pg_audit.  The database built by
  the buildfarm is specific to the extension, use \connect - instead.

- Further fixes for the buildfarm for pg_audit.  Also, use a function
  to load the extension ahead of all other calls, simulating load from
  shared_libraries_preload, to make sure the hooks are in place before
  logging start.

- Improve pg_audit regression tests.  Instead of creating a new
  superuser role, extract out what the current user is and use that
  user instead.  Further, clean up and drop all objects created by the
  regression test.  Pointed out by Tom.

- Remove useless pg_audit.conf.  No need to have pg_audit.conf any
  longer since the regression tests are just loading the module at the
  start of each session (to simulate being in
  shared_preload_libraries, which isn't something we can actually make
  happen on the buildfarm itself, it seems).  Pointed out by Tom

- Make repeated 'make installcheck' runs work.  In pg_audit, set
  client_min_messages up to warning, then reset the role attributes,
  to completely reset the session while not making the regression
  tests depend on being run by any particular user.

- pg_audit Makefile, REINDEX changes.  Clean up the Makefile, per
  Michael Paquier.  Classify REINDEX as we do in core, use '1.0' for
  the version, per Fujii.

- Add pg_audit, an auditing extension.  This extension provides
  detailed logging classes, ability to control logging at a per-object
  level, and includes fully-qualified object names for logged
  statements (DML and DDL) in independent fields of the log output.
  Authors: Ian Barwick, Abhijit Menon-Sen, David Steele Reviews by:
  Robert Haas, Tatsuo Ishii, Sawada Masahiko, Fujii Masao, Simon
  Riggs.  Discussion with: Josh Berkus, Jaime Casanova, Peter
  Eisentraut, David Fetter, Yeb Havinga, Alvaro Herrera, Petr Jelinek,
  Tom Lane, MauMau, Bruce Momjian, Jim Nasby, Michael Paquier,
  Fabrízio de Royes Mello, Neil Tiffin.

Peter Eisentraut pushed:

- Replace some appendStringInfo* calls with more appropriate variants.
  Author: David Rowley <dgrowleyml <at>>

- PL/Python: Remove procedure cache invalidation.  This was added to
  react to changes in the pg_transform catalog, but building with
  CLOBBER_CACHE_ALWAYS showed that PL/Python was not prepared for
  having its procedure cache cleared.  Since this is a marginal use
  case, and we don't do this for other catalogs anyway, we can
  postpone this to another day.

- Add pg_settings.pending_restart column.  with input from David G.
  Johnston, Robert Haas, Michael Paquier

- Fix whitespace

- hstore_plpython: Fix regression tests under Python 3

Álvaro Herrera pushed:

- "Fix" test_ddl_deparse regress test schedule.  MSVC is not smart
  enough to figure it out, so dumb down the Makefile and remove the
  schedule file.  Also add a .gitignore file.  Author: Michael Paquier

- Allow on-the-fly capture of DDL event details.  This feature lets
  user code inspect and take action on DDL events.  Whenever a
  ddl_command_end event trigger is installed, DDL actions executed are
  saved to a list which can be inspected during execution of a
  function attached to ddl_command_end.  The set-returning function
  pg_event_trigger_ddl_commands can be used to list actions so
  captured; it returns data about the type of command executed, as
  well as the affected object.  This is sufficient for many uses of
  this feature.  For the cases where it is not, we also provide a
  "command" column of a new pseudo-type pg_ddl_command, which is a
  pointer to a C structure that can be accessed by C code.  The struct
  contains all the info necessary to completely inspect and even
  reconstruct the executed command.  There is no actual deparse code
  here; that's expected to come later.  What we have is enough
  infrastructure that the deparsing can be done in an external
  extension.  The intention is that we will add some deparsing code in
  a later release, as an in-core extension.  A new test module is
  included.  It's probably insufficient as is, but it should be
  sufficient as a starting point for a more complete and future-proof
  approach.  Authors: Álvaro Herrera, with some help from Andres
  Freund, Ian Barwick, Abhijit Menon-Sen.  Reviews by Andres Freund,
  Robert Haas, Amit Kapila, Michael Paquier, Craig Ringer, David
  Steele.  Additional input from Chris Browne, Dimitri Fontaine,
  Stephen Frost, Petr Jelínek, Tom Lane, Jim Nasby, Steven Singer,
  Pavel Stěhule.  Based on original work by Dimitri Fontaine, though I
  didn't use his code.  Discussion: <at> <at> <at>

- Move strategy numbers to include/access/stratnum.h.  For upcoming
  BRIN opclasses, it's convenient to have strategy numbers defined in
  a single place.  Since there's nothing appropriate, create it.  The
  StrategyNumber typedef now lives there, as well as existing strategy
  numbers for B-trees (from skey.h) and R-tree-and-friends (from
  gist.h).  skey.h is forced to include stratnum.h because of the
  StrategyNumber typedef, but gist.h is not; extensions that currently
  rely on gist.h for rtree strategy numbers might need to add a new
  include.  A few .c files can stop including skey.h and/or gist.h,
  which is a nice side benefit.  Per discussion: <at>
  Authored by Emre Hasegeli and Álvaro.  (It's not clear to me why
  bootscanner.l has any #include lines at all.)

- Add BRIN infrastructure for "inclusion" opclasses.  This lets BRIN
  be used with R-Tree-like indexing strategies.  Also provided are
  operator classes for range types, box and inet/cidr.  The
  infrastructure provided here should be sufficient to create operator
  classes for similar datatypes; for instance, opclasses for PostGIS
  geometries should be doable, though we didn't try to implement one.
  (A box/point opclass was also submitted, but we ripped it out before
  commit because the handling of floating point comparisons in
  existing code is inconsistent and would generate corrupt indexes.)
  Author: Emre Hasegeli.  Cosmetic changes by me.  Review: Andreas

Andrew Dunstan pushed:

- pg_basebackup -F t now succeeds with a long symlink target

- Map basebackup tablespaces using a tablespace_map file.  Windows
  can't reliably restore symbolic links from a tar format, so instead
  during backup start we create a tablespace_map file, which is used
  by the restoring postgres to create the correct links in pg_tblspc.
  The backup protocol also now has an option to request this file to
  be included in the backup stream, and this is used by pg_basebackup
  when operating in tar mode.  This is done on all platforms, not just
  Windows.  This means that pg_basebackup will not work in tar
  mode against 9.4 and older servers, as this protocol option isn't
  implemented there.  Amit Kapila, reviewed by Dilip Kumar, with a
  little editing from me.

- Fix some errors from jsonb functions patch.  The catalog version
  should have been bumped, and the alternative regression result file
  was not up to date with the name of jsonb_pretty.

- Fix jsonb replace and delete on scalars and empty structures.  These
  operations now error out if attempted on scalars, and simply return
  the input if attempted on empty arrays or objects. Along the way we
  remove the unnecessary cloning of the input when it's known to be
  unchanged. Regression tests covering these cases are added.

- Additional functions and operators for jsonb.  jsonb_pretty(jsonb)
  produces nicely indented json output.  jsonb || jsonb concatenates
  two jsonb values.  jsonb - text removes a key and its associated
  value from the json.  jsonb - int removes the designated array
  element.  jsonb - text[] removes a key and associated value or array
  element at the designated path.  jsonb_replace(jsonb,text[],jsonb)
  replaces the array element designated by the path, or the value
  associated with the key designated by the path, with the given
  value.  Original work by Dmitry Dolgov, adapted and reworked for
  PostgreSQL core by Andrew Dunstan, reviewed and tidied up by Petr
  Jelinek.
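A rough Python analogue of the new operators' shallow semantics, with
dicts and lists standing in for jsonb objects and arrays (illustrative
only; the authoritative behavior is in the commit's regression tests):

```python
a = {"a": 1, "b": {"x": 1}}
b = {"b": 2, "c": 3}

concat = {**a, **b}          # jsonb || jsonb: shallow merge, right wins
minus_key = {k: v for k, v in a.items() if k != "b"}   # jsonb - text

arr = [1, 2, 3]
minus_elem = arr[:1] + arr[2:]   # jsonb - int: drop the element at index 1

def replace(target, path, new_value):
    """jsonb_replace(target, path, new_value), for a one-key path:
    swap in new_value at the keyed position, leaving the rest intact."""
    out = dict(target)
    out[path[0]] = new_value
    return out

replaced = replace(a, ["b"], {"y": 2})
```

Note that concatenation is shallow: nested objects under a duplicated
key are replaced outright, not merged recursively.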

Andres Freund pushed:

- Fix ON CONFLICT bugs that manifest when used in rules.  Specifically
  the tlist and rti of the pseudo "excluded" relation weren't properly
  treated by expression_tree_walker, which led to errors when excluded
  was referenced inside a rule because the varnos were not properly
  adjusted.  Similar omissions in OffsetVarNodes and
  expression_tree_mutator had less impact, but should obviously be
  fixed nonetheless.  A couple of tests for ON CONFLICT UPDATE into
  INSERT-rule-bearing relations have been added.  In passing I updated
  a couple of comments.

- Add pgstattuple_approx() to the pgstattuple extension.  The new
  function allows estimating bloat and other table-level statistics in
  a faster, but approximate, way.  It does so by using information from
  the free space map for pages marked as all-visible in the visibility
  map.  The rest of the table is actually read, and free space/bloat is
  measured accurately.  In many cases that allows getting bloat
  information much more quickly, causing less IO.  Author: Abhijit
  Menon-Sen Reviewed-By: Andres Freund, Amit Kapila and Tomas Vondra
  Discussion: 20140402214144.GA28681 <at>
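The estimation strategy is a two-way split, which a tiny Python toy can
illustrate (conceptual only, not the extension's C code): trust the
free space map for all-visible pages, and measure the rest exactly.

```python
# Each tuple: (page is all-visible, FSM-reported free bytes,
#              actual free bytes found by reading the page)
pages = [
    (True,  100, 100),
    (True,   50,  60),   # FSM may be slightly stale -- that's the tradeoff
    (False,   0,  80),
]

def approx_free_space(pages):
    """Sum free space: cheap FSM lookups for all-visible pages,
    exact page reads only for the remainder."""
    total = 0
    for all_visible, fsm_free, actual_free in pages:
        total += fsm_free if all_visible else actual_free
    return total
```

When most of a table is all-visible, most pages are never read, which
is where the IO savings come from.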

- Support GROUPING SETS, CUBE and ROLLUP.  This SQL standard
  functionality allows aggregating data by different GROUP BY clauses
  at once.  Each grouping set returns rows with the columns it does not
  group by set to NULL.  This could previously be achieved by doing
  each grouping as a separate query, conjoined by UNION ALLs. Besides
  being considerably more concise, grouping sets will in many cases be
  faster, requiring only one scan over the underlying data.  The
  current implementation of grouping sets only supports using sorting
  for input. Individual sets that share a sort order are computed in
  one pass. If there are sets that don't share a sort order,
  additional sort & aggregation steps are performed. These additional
  passes are sourced by the previous sort step; thus avoiding repeated
  scans of the source data.  The code is structured in a way that
  adding support for purely using hash aggregation or a mix of hashing
  and sorting is possible. Sorting was chosen to be supported first,
  as it is the most generic method of implementation.  Instead of
  representing the chain of sort and aggregation steps as full-blown
  planner and executor nodes, as in earlier versions of the patch, all
  but the first sort are performed inside the aggregation node itself.
  This avoids the need to do some unusual gymnastics to handle having
  to return aggregated and non-aggregated tuples from underlying
  nodes, as well as having to shut down underlying nodes early to
  limit memory usage.  The optimizer still builds Sort/Agg nodes to
  describe each phase; they're not part of the plan tree, but instead
  additional data for the aggregation node.  They're a
  convenient and preexisting way to describe aggregation and sorting.
  The first (and possibly only) sort step is still performed as a
  separate execution step. That retains similarity with existing group
  by plans, makes rescans fairly simple, avoids very deep plans
  (leading to slow explains) and easily allows skipping the sorting
  step if the underlying data is sorted by other means.  A somewhat
  ugly side of this patch is having to deal with a grammar ambiguity
  between the new CUBE keyword and the cube extension/functions named
  cube (and rollup). To avoid breaking existing deployments of the
  cube extension it has not been renamed, neither has cube been made a
  reserved keyword. Instead precedence hacking is used to make GROUP
  BY cube(..) refer to the CUBE grouping sets feature, and not the
  function cube(). To actually group by a function cube(), unlikely as
  that might be, the function name has to be quoted.  Needs a
  catversion bump because stored rules may change.  Author: Andrew
  Gierth and Atri Sharma, with contributions from Andres Freund.
  Reviewed-By: Andres Freund, Noah Misch, Tom Lane, Svenne Krap, Tomas
  Vondra, Erik Rijkers, Marti Raudsepp, Pavel Stehule.  Discussion:
  CAOeZVidmVRe2jU6aMk_5qkxnB7dfmPROzM7Ur8JPW5j8Y5X-Lw <at>
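The precedence hack described above can be illustrated with a short
sketch (table and column names here are hypothetical):

```sql
-- GROUP BY CUBE(...) now refers to the grouping-sets feature:
SELECT brand, size, sum(sales)
FROM items_sold
GROUP BY CUBE (brand, size);

-- To group by the cube extension's function instead (unlikely as
-- that might be), quote the function name:
SELECT "cube"(ARRAY[x, y]), count(*)
FROM points
GROUP BY "cube"(ARRAY[x, y]);
```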

Fujii Masao pushed:

- Support VERBOSE option in REINDEX command.  When this option is
  specified, a progress report is printed as each index is reindexed.
  Per discussion, we agreed on the following syntax for the
  extensibility of the options: REINDEX (flexible options) { INDEX |
  ... } name.  Patch by Sawada Masahiko, reviewed by Robert Haas,
  Fabrízio Mello, Alvaro Herrera, Kyotaro Horiguchi, Jim Nasby and me.
  CAD21AoA0pK3YcOZAFzMae+2fcc3oGp5zoRggDyMNg5zoaWDhdQ <at>
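  With the agreed parenthesized syntax, usage looks like this (the
  table and index names are hypothetical):

```sql
-- Print a progress report as each index is reindexed:
REINDEX (VERBOSE) TABLE my_table;

-- The parenthesized option list works with the other forms too:
REINDEX (VERBOSE) INDEX my_table_pkey;
```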

- Support --verbose option in reindexdb.  Sawada Masahiko, reviewed by
  Fabrízio Mello

Simon Riggs pushed:

- Separate block sampling functions.  Refactoring ahead of tablesample
  patch.  Requested and reviewed by Michael Paquier.  Petr Jelinek

- TABLESAMPLE, SQL Standard and extensible.  Add a TABLESAMPLE clause
  to SELECT statements that allows the user to specify random BERNOULLI
  sampling or block-level SYSTEM sampling. The implementation allows
  extensible sampling functions to be written, using a standard API.
  The basic version follows the SQL Standard exactly. Usable concrete
  use cases for the sampling API follow in later commits.  Petr
  Jelinek.  Reviewed by Michael Paquier and Simon Riggs

- TABLESAMPLE system_rows(limit).  Contrib module implementing a
  tablesample method that allows you to limit the sample by a hard row
  limit.  Petr Jelinek.  Reviewed by Michael Paquier, Amit Kapila and
  Simon Riggs

- contrib/tsm_system_rows

- TABLESAMPLE system_time(limit).  Contrib module implementing a
  tablesample method that allows you to limit the sample by a hard
  time limit.  Petr Jelinek.  Reviewed by Michael Paquier, Amit Kapila
  and Simon Riggs
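  Usage of the two contrib sampling methods might look like this
  (table name hypothetical):

```sql
CREATE EXTENSION tsm_system_rows;
CREATE EXTENSION tsm_system_time;

-- At most 100 rows, regardless of table size:
SELECT * FROM orders TABLESAMPLE SYSTEM_ROWS(100);

-- As many rows as can be read in roughly 1000 milliseconds:
SELECT * FROM orders TABLESAMPLE SYSTEM_TIME(1000);
```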

- Tablesample method API docs.  Petr Jelinek

- SQL Standard feature T613 Sampling now supported.

- Add to contrib/Makefile

- Add docs for tablesample system_rows().

- Add docs for tablesample system_time().

- contrib/tsm_system_time

Heikki Linnakangas pushed:

- Allow GiST distance function to return merely a lower-bound.  The
  distance function can now set *recheck = false, like index quals.
  The executor will then re-check the ORDER BY expressions, and use a
  queue to reorder the results on the fly.  This makes it possible to
  do kNN-searches on polygons and circles, which don't store the exact
  value in the index, but just a bounding box.  Alexander Korotkov and
  me.
- Fix datatype confusion with the new lossy GiST distance functions.
  We can only support a lossy distance function when the distance
  function's datatype is comparable with the original ordering
  operator's datatype.  The distance function always returns a float8,
  so we are limited to float8, and float4 (by a hard-coded cast of the
  float8 to float4).  In light of this limitation, it seems like a
  good idea to have a separate 'recheck' flag for the ORDER BY
  expressions, so that if you have a non-lossy distance function, it
  still works with lossy quals. There are no such cases among the
  built-in or contrib opclasses at the moment, but it's plausible.
  hidden assumption that the ORDER BY values returned by GiST match
  the original ordering operator's return type, but there are plenty
  of examples where that's not true, e.g. in btree_gist and pg_trgm.
  As long as the distance function is not lossy, we can tolerate that
  and just not return the distance to the executor (or rather, always
  return NULL). The executor doesn't need the distances if there are
  no lossy results.  There was another little bug: the recheck
  variable was not initialized before calling the distance function.
  That revealed the bigger issue, as the executor tried to reorder
  tuples that didn't need reordering, and that failed because of the
  datatype mismatch.

- Fix docs build. Oops.

- Add archive_mode='always' option.  In 'always' mode, the standby
  independently archives all files it receives from the primary.
  Original patch by Fujii Masao, docs and review by me.
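  On a standby, this might be configured as follows (the archive path
  is hypothetical):

```
# postgresql.conf on the standby
archive_mode = always          # archive independently of the primary
archive_command = 'cp %p /mnt/standby_archive/%f'
```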

- Silence another create_index regression test failure.  More platform
  differences in the less-significant digits in output.  Per buildfarm
  member rover_firefly, still.

- Silence create_index regression test failure.  The expected output
  contained some floating point values which might get rounded
  slightly differently on different platforms. The exact output isn't
  very interesting in this test, so just round it.  Per buildfarm
  member rover_firefly.

Magnus Hagander pushed:

- Minor docs fixes for pg_audit.  Peter Geoghegan

- Fix typos in comments.  Dmitriy Olshevskiy

- Fix wording error caused by recent typo fixes.  It wasn't just a
  typo, but bad wording. This should make it more clear. Pointed out
  by Tom Lane.

== Rejected Patches (for now) ==

No one was disappointed this week :-)

== Pending Patches ==

Andres Freund sent in another revision of a patch to manipulate
complex types as non-contiguous structures in-memory.

Michael Paquier sent in a patch to improve regression tests to check
LOCK TABLE and table permissions.

Michael Paquier sent in a patch to remove leftovers after dcae5fac
(improve speed of make check-world) in git tree with check-world.

Abhijit Menon-Sen sent in two more revisions of a patch to create a
fast bloat measurement tool.

Heikki Linnakangas sent in two more revisions of a patch to fix some
interactions between streaming replication and WAL archiving.

Kaigai Kouhei sent in three more revisions of a patch to improve the
custom/foreign join API.

Stephen Frost sent in two more revisions of a patch to add default
roles for functionality.

Álvaro Herrera and Petr Jelinek traded patches to add an access method
for sequences.

Kyotaro HORIGUCHI sent in another revision of a patch to add
multi-column statistics.

Petr A. Korobeinikov sent in another revision of a patch to add
support for \ev viewname and \sv viewname in psql.

Fabien COELHO sent in a patch to allow backslash-continuations in
custom scripts.

Etsuro Fujita sent in a doc patch for ALTER FOREIGN TABLE.

Robert Haas sent in a patch to fix the documentation of

Beena Emerson sent in another revision of a patch to add support for N
synchronous standby servers.

Shigeru HANADA sent in another revision of a patch to add postgres_fdw
join pushdown.


Sent via pgsql-announce mailing list (pgsql-announce <at>
To make changes to your subscription:

Markus Wanner | 16 May 12:53 2015

SwissPUG now open for members

Dear PostgreSQL Users,

I'm pleased to announce that we recently founded the Swiss PostgreSQL
Users Group, an association under Swiss law, and we are now ready to
accept further members. To join us, please simply fill in our online
form here:

The bylaws are available from the same website and the membership fee
was set to CHF 50 per year. If you have any further questions, please
don't hesitate to contact us via email to: board <at> or come
talk to us at the next Swiss Postgres Conference, where we will
briefly present the association as well. See:

We're looking forward to interested parties from all parts of
Switzerland and hope to help PostgreSQL to thrive and get the
recognition it deserves - in Switzerland, too.

Markus Wanner
for the board


Sent via pgsql-announce mailing list (pgsql-announce <at>
To make changes to your subscription:

Dave Page | 12 May 15:31 2015

Call for Papers - PostgreSQL Conference Europe 2015

PostgreSQL Conference Europe 2015 will be held on October 27-30 in the
Vienna Marriott Hotel, in Vienna, Austria. It will cover topics for
PostgreSQL users, developers and contributors, as well as decision and
policy makers. For more information about the conference, please see
the website at

We are now accepting proposals for talks in English or German. If you
are submitting a talk in German, please write a short summary of the
abstract in English in the "submission notes" field.

Each session will last 45 minutes, and may be on any topic related to
PostgreSQL. Suggested topic areas include:

- Developing applications for PostgreSQL
- Administering large scale PostgreSQL installations
- Case studies and/or success stories of PostgreSQL deployments
- PostgreSQL tools and utilities
- PostgreSQL hacking
- Community & user groups
- Tuning the server
- Migrating from other systems
- Scaling/replication
- Benchmarking & hardware
- PostgreSQL related products

Of course, we're happy to receive proposals for talks on other
PostgreSQL related topics as well.

We may also have a limited number of longer, 90-minute, slots
available. Please indicate clearly in your submission if you wish to
give a 90-minute talk.

Finally, there will be a session of five minute lightning talks. A
separate call for proposals will be made for them further on.

The submission deadline is August 7th. Selected speakers will be
notified before August 24th, 2015.

Please submit your proposals by going to and following the instructions

The proposals will be considered by a committee, which will produce a
schedule to be published nearer the conference date.

All selected speakers will get free entry to the conference (excluding
training sessions). We do not in general cover travel and
accommodations for speakers, but may be able to do that in limited
cases. If you require assistance with funding to be able to attend,
please make a note of this in the submission notes field.

We look forward to hearing from you, and seeing you in Vienna in October!

Regards, Dave

Dave Page
PostgreSQL Europe


Sent via pgsql-announce mailing list (pgsql-announce <at>
To make changes to your subscription:

Gilles Darold | 12 May 14:16 2015

pgBadger 7.0 is out

Paris, France - May 11th, 2015

DALIBO is proud to announce the release of pgBadger 7.0.

pgBadger is a PostgreSQL performance analyzer, built for speed with
fully detailed reports based on your PostgreSQL log files.

This major release adds more useful reports and features. One of the
longest-standing feature requests has also been implemented: when
auto_explain output is available in the log files, the EXPLAIN plans
are now reported together with the top slowest queries. Thanks to the
kind authorization of depesz, a link to open the explain plan directly
on his explain site is also provided.

List of other new reports in this release:

  * Events (panic, fatal, error and warning) distribution per 5 minutes.
  * Per application details (total duration and times executed) for each
    query reported in Top Queries. The details are visible from a button
    called "App(s) involved".
  * Cumulated queries durations per user.

Three new reports help identify queries cancelled on secondaries:

  * Number of cancelled queries (graph)
  * Queries generating the most cancellation (N)
  * Queries most cancelled.

For the complete list of changes, please checkout the release note on

===== Links & Credits =====

DALIBO would like to thank the developers who submitted patches and the
users who reported bugs and feature requests, especially Mael Rimbault,
Thomas Reiss, Korriliam, rlowe and Antti Koivisto.

pgBadger is an open project. Any contribution to build a better tool is
welcome. You just have to send your ideas, features requests or patches
using the GitHub tools or directly on our mailing list.

Links :

  * Download :
  * Mailing List :


**About pgBadger** :

pgBadger is a new-generation log analyzer for PostgreSQL, created by
Gilles Darold (also author of ora2pg, the powerful migration tool).
pgBadger is a fast and easy tool to analyze your SQL traffic and create
HTML5 reports with dynamic graphs. pgBadger is the perfect tool to
understand the behavior of your PostgreSQL servers and identify which
SQL queries need to be optimized.

Docs, Download & Demo at


**About DALIBO** :

DALIBO is the leading PostgreSQL company in France, providing support,
training and consulting to its customers since 2005. The company
contributes to the PostgreSQL community in various ways, including:
code, articles, translations, free conferences and workshops.

Check out DALIBO's open source projects at

Gilles Darold -


Sent via pgsql-announce mailing list (pgsql-announce <at>
To make changes to your subscription:

Stephen Frost | 11 May 22:47 2015

PostgresOpen 2015 - Registration Opens, CFP Extended!


Early Bird Registration for PostgresOpen 2015, being held in Dallas,
Texas from September 16th to 18th, is now open!

Simply go to: and register!

The Program Committee is excited to be able to continue providing
PostgresOpen at the same rate as last year, with a $200 discount for
early bird registrations!

We are also pleased to announce that talks will now be accepted up until
May 24th, Anywhere on Earth (AoE), but that leaves only two weeks
to submit your talk!

Presentations on any topic related to PostgreSQL including, but not
limited to, case studies, experiences, tools and utilities, migration
stories, existing features, new feature development, benchmarks, and
performance tuning will be considered.

Tutorials will be announced in the coming weeks - watch our blog for updates!

The Program Committee looks forward to bringing the best PostgreSQL
presentations and tutorials from speakers around the world to the
first multi-day PostgreSQL conference in Texas!

Speakers will be notified by June 1, 2015 AoE, with the schedule to be
published once selected speakers have confirmed.

PostgresOpen 2015 is proud to announce Javelin Marketing Group,
OmniTI and Consistent State as sponsors!  Sponsorship opportunities
are still available and the prospectus is available here:

We look forward to seeing everyone in Dallas!

Any questions?  Please contact: program2015 <at>

Stephen Frost
PostgresOpen 2015 Committee Chair
Satoshi Nagayasu | 12 May 03:07 2015

Postgres Toolkit 0.2 released

Hi PostgreSQL folks,

Today, we are pleased to announce the release of Postgres Toolkit 0.2.

Postgres Toolkit is a collection of scripts and utilities intended
to help PostgreSQL DBAs improve the quality and productivity of their
daily jobs and operations.

With Postgres Toolkit, DBAs can avoid writing complicated queries and
maintaining their own scripts for their daily operations. The concept
of the toolkit is like "a Victorinox for PostgreSQL DBAs."

For more details, please visit the following blog entry.


Satoshi Nagayasu <snaga <at>>


Sent via pgsql-announce mailing list (pgsql-announce <at>
To make changes to your subscription:

David Fetter | 11 May 01:34 2015

== PostgreSQL Weekly News - May 10 2015 ==

== PostgreSQL Weekly News - May 10 2015 ==

== PostgreSQL Product News ==

barman 1.4.1, a backup and recovery manager for PostgreSQL, released.

== PostgreSQL Jobs for May ==

== PostgreSQL Local ==

The second Swiss Postgres Conference will be held June 25-26, 2015 at
HSR Rapperswil.

There is a Postgres track at a database technology conference (DTCC)
on April 18, 2015 in Beijing, China.

pgDay Paris will be held in Paris, France on April 21, 2015.

PGDay in Belfort, France will be held June 2, 2015.

PGCon 2015 is June 16-20 in Ottawa, Canada.

PGDay UK, Conference will be taking place on 7th July 2015 – it is aimed at
the UK PostgreSQL Community.  The CfP is open until 13 April 2015.

PGDay Campinas 2015 will take place in Campinas on August 7.
The CfP is open through May 31.

The Call For Papers for PostgresOpen 2015, being held in Dallas, Texas
from September 16th to 18th, is now open.

The CfP for PostgreSQL Session #7, September 24th, 2015 in Paris,
France, is open until June 15, 2015.  call-for-paper <AT>
postgresql-sessions <DOT> org.

PGConf Silicon Valley 2015 is November 17-18 at the South San
Francisco Convention Center.  The CfP is open through June 15.

== PostgreSQL in the News ==

Planet PostgreSQL:

PostgreSQL Weekly News is brought to you this week by David Fetter

Submit news and announcements by Sunday at 3:00pm Pacific time.
Please send English language ones to david <at>, German language
to pwn <at>, Italian language to pwn <at>  Spanish language
to pwn <at>

== Applied Patches ==

Tom Lane pushed:

- Second try at fixing warnings caused by commit 9b43d73b3f9bef27.
  Commit ef3f9e642d2b2bba suppressed one cause of warnings here, but
  recent clang on OS X is still unhappy because we're passing a "long"
  to abs().  The fact that tm_gmtoff is declared as long is no doubt a
  hangover from days when int might be only 16 bits; but Postgres has
  never been able to run on such machines, so we can just cast it to
  int with no worries.  For consistency, also cast to int in the other
  uses of tm_gmtoff in this stanza.  Note: this code is still broken
  on machines that don't follow C99
  integer-division-truncates-towards-zero rules.  Given the lack of
  complaints about it, I don't feel a large desire to complicate
  things enough to cope with the pre-C99 rules.

- Improve procost estimates for some text search functions.  The text
  search functions that involve parsing raw text into lexemes are
  remarkably CPU-intensive, so estimating them at the same cost as
  most other built-in functions seems like a mistake; moreover, doing
  so turns out to discourage the optimizer from using functional
  indexes on these functions.  After some debate, we've agreed to
  raise procost from 1 to 100 for to_tsvector(), plainto_tsvector(),
  to_tsquery(), ts_headline(), ts_match_tt(), and ts_match_tq(), which
  are all the text search functions that parse raw text.  Also
  increase procost for the 2-argument form of ts_rewrite()
  (tsquery_rewrite_query); while this function doesn't do text
  parsing, it does execute a user-supplied SQL query, so its previous
  procost of 1 is clearly a drastic underestimate.  It seems
  reasonable to assign it the same cost we assign to PL functions by
  default, so 100 is the number here too.  I did not bother bumping
  catversion for this change, since it does not break catalog
  compatibility with the server executable nor result in any
  regression test changes.  Per complaint from Andrew Gierth and
  subsequent discussion.
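  The new costs can be inspected in the catalog; on a 9.5 build these
  functions should now show procost = 100:

```sql
SELECT proname, procost
FROM pg_proc
WHERE proname IN ('to_tsvector', 'to_tsquery', 'ts_headline')
ORDER BY proname;
```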

- Fix incorrect declaration of citext's regexp_matches() functions.
  These functions should return SETOF TEXT[], like the core functions
  they are wrappers for; but they were incorrectly declared as
  returning just TEXT[].  This mistake had two results: first, if
  there was no match you got a scalar null result, whereas what you
  should get is an empty set (zero rows).  Second, the 'g' flag was
  effectively ignored, since you would get only one result array even
  if there were multiple matches, as reported by Jeff Certain.  While
  ignoring 'g' is a clear bug, the behavior for no matches might well
  have been thought to be the intended behavior by people who hadn't
  compared it carefully to the core regexp_matches() functions.  So we
  should tread carefully about introducing this change in the back
  branches.  Still, it clearly is a bug and so providing some fix is
  desirable.  After discussion, the conclusion was to introduce the
  change in a 1.1 version of the citext extension (as we would need to
  do anyway); 1.0 still contains the incorrect behavior.  1.1 is the
  default and only available version in HEAD, but it is optional in
  the back branches, where 1.0 remains the default version.  People
  wishing to adopt the fix in back branches will need to explicitly do
  ALTER EXTENSION citext UPDATE TO '1.1'.  (I also provided a
  downgrade script in the back branches, so people could go back to
  1.0 if necessary.) This should be called out as an incompatible
  change in the 9.5 release notes, although we'll also document it in
  the next set of back-branch release notes.  The notes should mention
  that any views or rules that use citext's regexp_matches() functions
  will need to be dropped before upgrading to 1.1, and then recreated
  again afterwards.  Back-patch to 9.1.  The bug goes all the way back
  to citext's introduction in 8.4, but pre-9.1 there is no extension
  mechanism with which to manage the change.  Given the lack of
  previous complaints it seems unnecessary to change this behavior in
  9.0, anyway.
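  To adopt the fix in a back branch, and to see the corrected
  set-returning behaviour:

```sql
ALTER EXTENSION citext UPDATE TO '1.1';

-- With 1.1 this returns one row per match (honouring the 'g' flag),
-- and zero rows when there is no match, like the core function:
SELECT regexp_matches('barbeque'::citext, 'b', 'g');
```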

- citext's regexp_matches() functions weren't documented, either.

- Add missing "static" marker.  Per buildfarm member pademelon.

- Code review for foreign/custom join pushdown patch.  Commit
  e7cb7ee14555cc9c5773e2c102efd6371f6f2005 included some design
  decisions that seem pretty questionable to me, and there was quite a
  lot of stuff not to like about the documentation and comments.
  Clean up as follows: * Consider foreign joins only between foreign
  tables on the same server, rather than between any two foreign
  tables with the same underlying FDW handler function.  In most if
  not all cases, the FDW would simply have had to apply the
  same-server restriction itself (far more expensively, both for lack
  of caching and because it would be repeated for each combination of
  input sub-joins), or else risk nasty bugs.  Anyone who's really
  intent on doing something outside this restriction can always use
  the set_join_pathlist_hook.  * Rename fdw_ps_tlist/custom_ps_tlist
  to fdw_scan_tlist/custom_scan_tlist to better reflect what they're
  for, and allow these custom scan tlists to be used even for base
  relations.  * Change make_foreignscan() API to include passing the
  fdw_scan_tlist value, since the FDW is required to set that.
  Backwards compatibility doesn't seem like an adequate reason to
  expect FDWs to set it in some ad-hoc extra step, and anyway existing
  FDWs can just pass NIL.  * Change the API of path-generating
  subroutines of add_paths_to_joinrel, and in particular that of
  GetForeignJoinPaths and set_join_pathlist_hook, so that various
  less-used parameters are passed in a struct rather than as separate
  parameter-list entries.  The objective here is to reduce the
  probability that future additions to those parameter lists will
  result in source-level API breaks for users of these hooks.  It's
  possible that this is even a small win for the core code, since most
  CPU architectures can't pass more than half a dozen parameters
  efficiently anyway.  I kept root, joinrel, outerrel, innerrel, and
  jointype as separate parameters to reduce code churn in joinpath.c
  --- in particular, putting jointype into the struct would have been
  problematic because of the subroutines' habit of changing their
  local copies of that variable.  * Avoid ad-hocery in
  ExecAssignScanProjectionInfo.  It was probably all right for it to
  know about IndexOnlyScan, but if the list is to grow we should
  refactor the knowledge out to the callers.  * Restore
  nodeForeignscan.c's previous use of the relcache to avoid extra
  GetFdwRoutine lookups for base-relation scans.  * Lots of cleanup of
  documentation and missed comments.  Re-order some code additions
  into more logical places.

Andrew Dunstan pushed:

- Fix two small bugs in json's populate_record_worker.  The first bug
  is not releasing a tupdesc when doing an early return out of the
  function. The second bug is a logic error in choosing when to do an
  early return if given an empty jsonb object.  Bug reports from Pavel
  Stehule and Tom Lane respectively.  Backpatch to 9.4 where these
  were introduced.

- Add new OID alias type regrole.  The new type has the scope of the
  whole database cluster, so with respect to object dependencies it
  doesn't behave the same as the existing OID alias types, which have
  database scope. To avoid confusion, constants of the new type are
  prohibited from appearing where dependencies are made involving it.
  Also, add a note to the docs about possible MVCC violation and
  optimization issues, which are general over the all reg* types.
  Kyotaro Horiguchi

- Add new OID alias type regnamespace.  Catalog version bumped.
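  The new alias types work like the existing reg* types, e.g.:

```sql
-- Cast a name to its OID alias, and OID columns to readable names:
SELECT 'pg_catalog'::regnamespace;

SELECT relname, relnamespace::regnamespace, relowner::regrole
FROM pg_class
LIMIT 5;
```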

Heikki Linnakangas pushed:

- Fix the same-rel optimization when creating WAL records.
  prev_regbuf was never set, and therefore the same-rel flag was never
  set on WAL records.  Report and fix by Zhanq Zq

- At promotion, archive last segment from old timeline with .partial
  suffix.  Previously, we would archive the possible-incomplete WAL
  segment with its normal filename, but that causes trouble if the
  server owning that timeline is still running, and tries to archive
  the same segment later. It's not nice for the standby to trip up the
  master's archival like that. And it's pretty confusing, anyway, to
  have an incomplete segment in the archive that's indistinguishable
  from a normal, complete segment.  To avoid such confusion, add a
  .partial suffix to the file. Or to be more precise, make a copy of
  the old segment under the .partial suffix, and archive that instead
  of the original file. pg_receivexlog also uses the .partial suffix
  for the same purpose, to tell apart incompletely streamed files from
  complete ones.  There is no automatic mechanism to use the .partial
  files at recovery, so they will go unused, unless the administrator
  manually copies to them to the pg_xlog directory (and removes the
  .partial suffix). Recovery won't normally need the WAL - when
  recovering to the new timeline, it will find the same WAL on the
  first segment on the new timeline instead - but it nevertheless
  feels better to archive the file with the .partial suffix, for
  debugging purposes if nothing else.

- Add macros to check if a filename is a WAL segment or other such
  file.  We had many instances of the strlen + strspn combination to
  check for that.  This makes the code a bit easier to read.

Robert Haas pushed:

- Use outerPlanState macro instead of referring to lefttree.  This
  makes the executor code more consistent.  It also removes an
  apparently superfluous NULL test in nodeGroup.c.  Qingqing Zhou,
  reviewed by Tom Lane, and further revised by me.

- Recursively fsync() the data directory after a crash.  Otherwise, if
  there's another crash, some writes from after the first crash might
  make it to disk while writes from before the crash fail to make it
  to disk.  This could lead to data corruption.  Back-patch to all
  supported versions.  Abhijit Menon-Sen, reviewed by Andres Freund
  and slightly revised by me.

- Fix some problems with patch to fsync the data directory.
  pg_win32_is_junction() was a typo for pgwin32_is_junction().  open()
  was used not only in a two-argument form, which breaks on Windows,
  but also where BasicOpenFile() should have been used.  Per reports
  from Andrew Dunstan and David Rowley.

- Avoid using a C++ keyword as a structure member name.  Per request
  from Peter Eisentraut.

- Fix incorrect math in DetermineSafeOldestOffset.  The old formula
  didn't have enough parentheses, so it would do the wrong thing, and
  it used / rather than % to find a remainder.  The effect of these
  oversights is that the stop point chosen by the logic introduced in
  commit b69bf30b9bfacafc733a9ba77c9587cf54d06c0c might be rather
  meaningless.  Thomas Munro, reviewed by Kevin Grittner, with a
  whitespace tweak by me.

- Teach autovacuum about multixact member wraparound.  The logic
  introduced in commit b69bf30b9bfacafc733a9ba77c9587cf54d06c0c and
  repaired in commits 669c7d20e6374850593cb430d332e11a3992bbcf and
  7be47c56af3d3013955c91c2877c08f2a0e3e6a2 helps to ensure that we
  don't overwrite old multixact member information while it is still
  needed, but a user who creates many large multixacts can still
  exhaust the member space (and thus start getting errors) while
  autovacuum stands idly by.  To fix this, progressively ramp down the
  effective value (but not the actual contents) of
  autovacuum_multixact_freeze_max_age as member space utilization
  increases.  This makes autovacuum more aggressive and also reduces
  the threshold for a manual VACUUM to perform a full-table scan.
  This patch leaves unsolved the problem of ensuring that emergency
  autovacuums are triggered even when autovacuum=off.  We'll need to
  fix that via a separate patch.  Thomas Munro and Robert Haas

Peter Eisentraut pushed:

- Fix typos.  Author: Erik Rijkers <er <at>>

- doc: Update installation instructions for new shared
  libperl/libpython handling

- hstore_plpython: Support tests on Python 2.3.  Python 2.3 does not
  have the sorted() function, so do it the long way.

- Fix whitespace

Álvaro Herrera pushed:

- Add geometry/range functions to support BRIN inclusion.  This commit
  adds the following functions: box(point) -> box, bound_box(box, box)
  -> box, inet_same_family(inet, inet) -> bool, inet_merge(inet, inet)
  -> cidr, range_merge(anyrange, anyrange) -> anyrange.  The first of
  these is also used to implement a new assignment cast from point to
  box.  These functions are the first part of a base to implement an
  "inclusion" operator class for BRIN, for multidimensional data
  types.  Author: Emre Hasegeli.  Reviewed by: Andreas Karlsson
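  The new functions can be tried directly, for example:

```sql
SELECT box(point '(1,2)');                     -- point -> box
SELECT bound_box(box '((0,0),(1,1))', box '((2,2),(3,3))');
SELECT inet_same_family('192.168.1.1'::inet, '::1'::inet);
SELECT inet_merge('192.168.1.5/24'::inet, '192.168.2.5/24'::inet);
SELECT range_merge(int4range(1, 3), int4range(8, 10));
```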

- Improve BRIN infra, minmax opclass and regression test.  The minmax
  opclass was using the wrong support functions when cross-datatypes
  queries were run.  Instead of trying to fix the pg_amproc
  definitions (which apparently is not possible), use the already
  correct pg_amop entries instead.  This requires jumping through more
  hoops (read: extra syscache lookups) to obtain the underlying
  functions to execute, but it is necessary for correctness.  Author:
  Emre Hasegeli, tweaked by Álvaro.  Review: Andreas Karlsson.  Also
  change BrinOpcInfo to record each stored type's typecache entry
  instead of just the OID.  Turns out that the full type cache is
  necessary in brin_deform_tuple: the original code used the indexed
  type's byval and typlen properties to extract the stored tuple,
  which is correct in Minmax; but in other implementations that want
  to store something different, that's wrong.  The realization that
  this is a bug comes from Emre also, but I did not use his patch.  I
  also adopted Emre's regression test code (with smallish changes),
  which is more complete.

Magnus Hagander pushed:

- Fix indentation that could mask a future bug.  Michael Paquier,
  spotted using Coverity

- Fix minor resource leak in pg_dump.  Michael Paquier, spotted using

- Properly send SCM status updates when shutting down service on
  Windows.  The Service Control Manager should be notified regularly
  during a shutdown that takes a long time. Previously we would
  increase the counter, but forgot to actually send the notification
  to the system. The loop counter was also incorrectly initialized in
  the event that the startup of the system took long enough for it to
  increase, which could cause the shutdown process not to wait as long
  as expected.  Krystian Bigaj, reviewed by Michael Paquier

Bruce Momjian pushed:

- Makefile:  Add comment that doc uninstall clears man directories.
  Report by Mario Valdez

- doc:  adjust ordering of pg_stat_statement paragraphs.  Clarify
  installation instructions.  Patch by Ian Barwick

Andres Freund pushed:

- Represent columns requiring insert and update privileges
  independently.  Previously, relation range table entries used a single
  Bitmapset field representing which columns required either UPDATE or
  INSERT privileges, despite the fact that INSERT and UPDATE
  privileges are separately cataloged, and may be independently held.
  As statements so far required either insert or update privileges but
  never both, that was sufficient. The required permission could be
  inferred from the top level statement run.  The upcoming INSERT ...
  ON CONFLICT UPDATE feature needs to independently check for both
  privileges in one statement though, so that is not sufficient
  anymore.  Bumps catversion as stored rules change.  Author: Peter
  Geoghegan.  Reviewed-By: Andres Freund.

- Remove dependency on ordering in logical decoding upsert test.
  Buildfarm member magpie sorted the output differently than intended
  by Peter. "Resolve" the problem by simply not aggregating; it's not
  that many lines.

- Add support for INSERT ... ON CONFLICT DO NOTHING/UPDATE.  The newly
  added ON CONFLICT clause allows specifying an alternative to raising
  a unique or exclusion constraint violation error when inserting.  ON
  CONFLICT refers to constraints that can either be specified using an
  inference clause (by specifying the columns of a unique constraint)
  or by naming a unique or exclusion constraint.  DO NOTHING avoids
  the constraint violation, without touching the pre-existing row.  DO
  UPDATE SET ... [WHERE ...] updates the pre-existing tuple, and has
  access to both the tuple proposed for insertion and the existing
  tuple; the optional WHERE clause can be used to prevent an update
  from being executed.  The UPDATE SET and WHERE clauses have access
  to the tuple proposed for insertion using the "magic" EXCLUDED
  alias, and to the pre-existing tuple using the table name or its
  alias.  This feature is often referred to as upsert.  This is
  implemented using a new infrastructure called "speculative
  insertion". It is an optimistic variant of regular insertion that
  first does a pre-check for existing tuples and then attempts an
  insert.  If a violating tuple was inserted concurrently, the
  speculatively inserted tuple is deleted and a new attempt is made.
  If the pre-check finds a matching tuple the alternative DO NOTHING
  or DO UPDATE action is taken.  If the insertion succeeds without
  detecting a conflict, the tuple is deemed inserted.  To handle the
  possible ambiguity between the excluded alias and a table named
  excluded, and for convenience with long relation names, INSERT INTO
  now can alias its target table.  Bumps catversion as stored rules
  change.  Author: Peter Geoghegan, with significant contributions
  from Heikki Linnakangas and Andres Freund. Testing infrastructure by
  Jeff Janes.  Reviewed-By: Heikki Linnakangas, Andres Freund, Robert
  Haas, Simon Riggs, Dean Rasheed, Stephen Frost and many others.
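
  The upsert semantics described above (DO UPDATE seeing both the
  proposed row via the EXCLUDED alias and the pre-existing row via the
  table name) can be illustrated without a running PostgreSQL server:
  SQLite 3.24+ implements a compatible ON CONFLICT syntax modeled on
  this feature, so the following Python sketch uses sqlite3 purely as
  a stand-in.

```python
import sqlite3

# SQLite (3.24+) implements an upsert syntax modeled on PostgreSQL's
# INSERT ... ON CONFLICT, so it can illustrate the semantics without
# a running PostgreSQL server.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE counters (name TEXT PRIMARY KEY, hits INTEGER)")

def bump(name):
    # The DO UPDATE arm sees the row proposed for insertion through the
    # "excluded" alias and the pre-existing row through the table name,
    # mirroring the PostgreSQL behavior described in the commit message.
    conn.execute(
        "INSERT INTO counters (name, hits) VALUES (?, 1) "
        "ON CONFLICT (name) DO UPDATE SET hits = counters.hits + excluded.hits",
        (name,),
    )

bump("page")   # no conflict: a plain insert
bump("page")   # conflict: the DO UPDATE arm runs instead of erroring
print(conn.execute(
    "SELECT hits FROM counters WHERE name = 'page'").fetchone()[0])
```

  The table and column names here are invented for illustration; the
  EXCLUDED alias and ON CONFLICT clause are the PostgreSQL feature
  itself.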

- Minor ON CONFLICT related comments and doc fixes.  Geoff Winkless,
  Stephen Frost, Peter Geoghegan and me.

- Fix two problems in infer_arbiter_indexes().  The first is a pretty
  simple bug where a relcache entry is used after the relation is
  closed. In this particular situation it does not appear to have bad
  consequences unless compiled with RELCACHE_FORCE_RELEASE.  The
  second is that infer_arbiter_indexes() skipped indexes that aren't
  yet valid according to indcheckxmin. That's not required here,
  because uniqueness checks don't care about visibility according to
  an older snapshot.  While that's not really a bug, it makes things
  undesirably non-deterministic.  There is some hope that this
  explains a test failure on buildfarm member jaguarundi.  Discussion:
  9096.1431102730 <at>

Stephen Frost pushed:

- Remove reference to src/tools/backend/index.html.  src/tools/backend
  was removed back in 63f1ccd, but backend/storage/lmgr/README didn't
  get the memo.  Author: Amit Langote

- Add pg_file_settings view and function.  The function and view added
  here provide a way to look at all settings in postgresql.conf, any
  #include'd files, and (which is what backs the
  ALTER SYSTEM command).  The information returned includes the
  configuration file name, line number in that file, sequence number
  indicating when the parameter is loaded (useful to see if it is
  later masked by another definition of the same parameter), parameter
  name, and what it is set to at that point.  This information is
  updated on reload of the server.  This is unfiltered, privileged
  information and therefore access is restricted to superusers through
  the GRANT system.  Author: Sawada Masahiko, various improvements by
  me.  Reviewers: David Steele
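
  A quick way to exercise the new view (this requires a superuser
  session on a 9.5-era server; the column names below follow the
  commit message's description, so treat them as approximate):

```sql
-- After editing postgresql.conf and running pg_reload_conf():
SELECT sourcefile, sourceline, seqno, name, setting
  FROM pg_file_settings
 ORDER BY name, seqno;
-- A parameter that appears more than once is masked by the entry
-- with the highest seqno for that name.
```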

- Bump catversion for pg_file_settings.  Pointed out by Andres
  (thanks!) Apologies for not including it in the initial patch.

- Modify pg_stat_get_activity to build a tuplestore.  This updates
  pg_stat_get_activity() to build a tuplestore for its results instead
  of using the old-style multiple-call method.  This simplifies the
  function, though that wasn't the primary motivation for the change,
  which is that we may turn it into a helper function which can filter
  the results (or not) much more easily.

- Change default for include_realm to 1.  The default behavior for GSS
  and SSPI authentication methods has long been to strip the realm off
  of the principal; however, this is not a secure approach in
  multi-realm environments and the use-case for the parameter at all
  has been superseded by the regex-based mapping support available in
  pg_ident.conf.  Change the default for include_realm to be '1',
  meaning that we do NOT remove the realm from the principal by
  default.  Any installations which depend on the existing behavior
  will need to update their configurations (ideally by leaving
  include_realm set to 1 and adding a mapping in pg_ident.conf, but
  alternatively by explicitly setting include_realm=0 prior to
  upgrading).  Note that the mapping capability exists in all
  currently supported versions of PostgreSQL and so this change can be
  done today.  Barring that, existing users can update their
  configurations today to explicitly set include_realm=0 to ensure
  that the prior behavior is maintained when they upgrade.  This needs
  to be noted in the release notes.  Per discussion with Magnus and
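
The migration path described above (keep include_realm set to 1 and
strip the realm via a pg_ident.conf regex mapping instead) can be
sketched as follows; the realm EXAMPLE.COM, the map name, and the
address range are hypothetical:

```
# pg_hba.conf: keep the new default include_realm=1 and name a map
host  all  all  0.0.0.0/0  gss  include_realm=1  map=striprealm

# pg_ident.conf: strip the realm through a regex mapping
# MAPNAME     SYSTEM-USERNAME            PG-USERNAME
stripralm     /^(.*)@EXAMPLE\.COM$       \1
```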

- Improve ParseConfigFp comment wrt head/tail.  The head_p and tail_p
  pointers passed to ParseConfigFp() are actually input/output
  parameters, not strictly output parameters.  This updates the
  function comment to reflect that.  Per discussion with Tom.

- Correct reindexdb documentation.  --schema takes a schema, not a
  table.  Author: Sawada Masahiko

== Rejected Patches (for now) ==

No one was disappointed this week :-)

== Pending Patches ==

Michael Paquier sent in a patch to fix a potential pointer dereference
in plperl.c caused by the transforms patch.

Petr Korobeinikov sent in a patch to add \ev and \sv to edit and show
the creation scripts for views, respectively, to psql.

Álvaro Herrera sent in two more revisions of a patch to add deparsing

Arjen Nienhuis sent in a patch to have GB18030 handle more than 2-byte
Unicode code points.

Fabien COELHO sent in two more revisions of a patch to extend pgbench
expressions with functions.

Emre Hasegeli and Álvaro Herrera traded patches to add a BRIN range
operator class.

Tom Lane sent in another revision of a patch to manipulate complex
types as non-contiguous structures in-memory.

Tomas Vondra sent in another revision of a patch to add multivariate

Fabien COELHO sent in a patch to pgbench to allow '=' in \set.

SAWADA Masahiko and Fabrízio de Royes Mello traded patches to add

Stephen Frost sent in two more revisions of a patch to add default
roles for sets of administrative functions.

Fabien COELHO sent in another revision of a patch to remove
nclients/nthreads constraint from pgbench.

Stas Kelvich sent in another revision of a patch to add kNN support
for the cube extension.

Michael Paquier sent in a patch to redefine subxcnt as uint32 for
consistency with xcnt.

Michael Paquier sent in a patch to make more precise the rounding
behavior of numeric and double precision in docs.

Kaigai Kouhei sent in three more revisions of a patch to add a
custom/foreign join API.

Kaigai Kouhei sent in a patch to fix regtest policy for sepgsql.

Tom Lane sent in a patch to fix a bug in HOT.

