Sandeep Thakkar | 1 Feb 15:11 2011

Re: Parallel SELECT retrieve only half of the results

I was also trying to set up parallel queries and got some information about the setup from the pgpool doc, but I have a few questions:

Do I need to create tables like "accounts, branches, history"? And what are "aid, tid" in the function definition?
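
For reference, "accounts", "branches", "history" (plus "tellers") are the standard pgbench tables that the pgpool-II parallel query tutorial distributes, and "aid"/"tid" are their key columns (account id and teller id). Below is a minimal sketch of the tutorial-style setup, reusing the ranges from the dist_def_pgbench.sql diff that appears later in this digest; the database name, column list and ranges are tutorial examples, not requirements:

-- Partitioning function: maps an aid value to a backend node id.
CREATE OR REPLACE FUNCTION pgpool_catalog.dist_def_accounts(val anyelement)
RETURNS integer AS $$
    SELECT CASE WHEN $1 > 0 AND $1 <= 100000 THEN 0
           WHEN $1 > 100000 AND $1 <= 200000 THEN 1
           ELSE 2
           END;
$$ LANGUAGE sql;

-- Register the table and its partitioning key in the System DB.
INSERT INTO pgpool_catalog.dist_def VALUES (
    'bench_parallel',                          -- database name
    'public',                                  -- schema name
    'accounts',                                -- table name
    'aid',                                     -- partitioning key column
    ARRAY['aid', 'bid', 'abalance', 'filler'], -- column names
    ARRAY['integer', 'integer', 'integer', 'character(84)'],
    'pgpool_catalog.dist_def_accounts'         -- partitioning function
);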

Thanks.

From: Alessandro Candini <candini-8EtKWW2T1Xo@public.gmane.org>
To: pgpool-general-JL6EbXIHTPOxbKUeIHjxjQ@public.gmane.org
Sent: Mon, January 31, 2011 5:04:28 PM
Subject: [Pgpool-general] Parallel SELECT retrieve only half of the results

Hi, I have configured pgpool-II-3.0.1 with postgresql-9.0.2 for parallel queries:

listen_addresses = '*'
port = 9999
pcp_port = 9898
socket_dir = '/tmp'
pcp_socket_dir = '/tmp'
backend_socket_dir = '/tmp'
pcp_timeout = 10
num_init_children = 32
max_pool = 4
child_life_time = 0
connection_life_time = 0
child_max_connections = 0
client_idle_limit = 0
authentication_timeout = 60
logdir = '/var/log/pgpool'
pid_file_name = '/var/log/pgpool/pgpool.pid'
replication_mode = true
load_balance_mode = false
replication_stop_on_mismatch = false
failover_if_affected_tuples_mismatch = false
replicate_select = true
reset_query_list = 'ABORT; DISCARD ALL'
white_function_list = ''
black_function_list = 'nextval,setval'
print_timestamp = true
master_slave_mode = false
master_slave_sub_mode = 'stream'
delay_threshold = 100
log_standby_delay = 'if_over_threshold'
connection_cache = true
health_check_timeout = 10
health_check_period = 10
health_check_user = 'postgis'
failover_command = '/usr/lib/pgpool-II/3.0.1/bin/failover.sh %h %H /tmp/trigger_file0'
failback_command = '/usr/lib/pgpool-II/3.0.1/bin/failback.sh %h %M /tmp/trigger_file0 %m'
fail_over_on_backend_error = false
insert_lock = true
ignore_leading_white_space = false
log_statement = false
log_per_node_statement = false
log_connections = false
log_hostname = false
parallel_mode = true
enable_query_cache = false
pgpool2_hostname = ''
system_db_hostname = '192.168.0.128'
system_db_port = 5433
system_db_dbname = 'pgpool'
system_db_schema = 'pgpool_catalog'
system_db_user = 'pgpool'
system_db_password = ''
backend_hostname0 = '192.168.0.128'
backend_port0 = 5433
backend_weight0 = 1
backend_data_directory0 = '/home/database/9.0.2/data'
backend_hostname1 = '192.168.0.125'
backend_port1 = 5433
backend_weight1 = 1
backend_data_directory1 = '/home/database/9.0.2/data'
enable_pool_hba = false
recovery_user = 'postgis'
recovery_password = 'gispost'
recovery_1st_stage_command = 'basebackup.sh'
recovery_2nd_stage_command = ''
recovery_timeout = 60
client_idle_limit_in_recovery = 0
lobj_lock_table = 'pgpool_lobj_lock'
ssl = false
debug_level = 0

I have split the same db across the two machines 192.168.0.128 (the System DB)
and 192.168.0.125, but when I perform something like "SELECT COUNT(*) FROM
mytable" it retrieves only the data on the System DB...
I expected to obtain the sum of the data on the two nodes.
Where am I wrong?
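
In parallel mode pgpool only splits a query across nodes for tables registered in the System DB's pgpool_catalog.dist_def (or replicated via replicate_def); an unregistered table is read from a single node. A quick sanity check against the System DB, assuming the tutorial's catalog layout:

SELECT * FROM pgpool_catalog.dist_def WHERE table_name = 'mytable';

If this returns no row, the COUNT(*) cannot be distributed.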

Alessandro Candini | 1 Feb 15:27 2011

Parallel SELECT now works but is very slow...but dblink is parallel?

Finally I was able to properly configure pgpool in order to perform a parallel
query.

But it is so slow that it seems the query on my split database is
performed sequentially rather than in parallel.

I have pgpool-II-3.0.1 and 4 instances of postgresql-9.0.2 on the same machine
(respectively on ports 5433, 5434, 5435, 5436).
Every instance has a different piece (no replication!) of the same database.

A test query that I prepared, which retrieves a large amount of data, takes 6
seconds with pgpool, but if I launch the same query with 4 concurrent threads
directly against the database ports (using a Python script),
it takes only 0.9 seconds per thread.
OK, the results are not merged together, but what a difference!

I think that pgpool splits the query across the instances, but launches the
parts sequentially.
I guess "1 sec * db + result_merge_time" = 6 seconds, more or less...

Is that possible, and is there a way to fix it?
Is the dblink function launched in parallel (concurrently) on the 4 db
instances?

My configuration is as follows; thanks in advance...
listen_addresses = '*'
port = 9999
pcp_port = 9898
socket_dir = '/tmp'
pcp_socket_dir = '/tmp'
backend_socket_dir = '/tmp'
pcp_timeout = 10
num_init_children = 32
max_pool = 4
child_life_time = 0
connection_life_time = 0
child_max_connections = 0
client_idle_limit = 0
authentication_timeout = 60
logdir = '/var/log/pgpool'
pid_file_name = '/var/log/pgpool/pgpool.pid'
replication_mode = true
load_balance_mode = false
replication_stop_on_mismatch = false
failover_if_affected_tuples_mismatch = false
replicate_select = false
reset_query_list = 'ABORT; DISCARD ALL'
white_function_list = ''
black_function_list = 'nextval,setval'
print_timestamp = true
master_slave_mode = false
master_slave_sub_mode = 'stream'
delay_threshold = 100
log_standby_delay = 'if_over_threshold'
connection_cache = true
health_check_timeout = 10
health_check_period = 10
health_check_user = 'postgis'
failover_command = '/usr/lib/pgpool-II/3.0.1/bin/failover.sh %h %H /tmp/trigger_file0'
failback_command = '/usr/lib/pgpool-II/3.0.1/bin/failback.sh %h %M /tmp/trigger_file0 %m'
fail_over_on_backend_error = false
insert_lock = true
ignore_leading_white_space = false
log_statement = false
log_per_node_statement = false
log_connections = false
log_hostname = false
parallel_mode = true
enable_query_cache = false
pgpool2_hostname = ''
system_db_hostname = 'localhost'
system_db_port = 5433
system_db_dbname = 'pgpool'
system_db_schema = 'pgpool_catalog'
system_db_user = 'pgpool'
system_db_password = 'gispost'
backend_hostname0 = 'localhost'
backend_port0 = 5433
backend_weight0 = 1
backend_data_directory0 = '/home/database/9.0.2/db_0/data'
backend_hostname1 = 'localhost'
backend_port1 = 5434
backend_weight1 = 1
backend_data_directory1 = '/home/database/9.0.2/db_1/data'
backend_hostname2 = 'localhost'
backend_port2 = 5435
backend_weight2 = 1
backend_data_directory2 = '/home/database/9.0.2/db_2/data'
backend_hostname3 = 'localhost'
backend_port3 = 5436
backend_weight3 = 1
backend_data_directory3 = '/home/database/9.0.2/db_3/data'
enable_pool_hba = true
recovery_user = 'postgis'
recovery_password = 'gispost'
recovery_1st_stage_command = 'basebackup.sh'
recovery_2nd_stage_command = ''
recovery_timeout = 60
client_idle_limit_in_recovery = 0
lobj_lock_table = 'pgpool_lobj_lock'
ssl = false
debug_level = 0
Wouter D'Haeseleer | 1 Feb 16:53 2011

pgpool-ha and pg9 Streaming replication issue

Hi all,

I have 2 servers running postgres9 with streaming replication.
On both nodes I have configured pgpool which are managed by pgpool-ha using heartbeat.

Now let's assume the following

Node 0 is primary DB and running pgpool2
Node 1 is standby DB

Now let's say I pull out the power on node0.
Heartbeat detects that the node is down and fails over the Virtual-IP and the pgpool2 instance as expected.

However pgpool does not like to promote the slave to primary in this case.

The log will show the following:

2011-02-01 16:48:05 LOG:   pid 7837: pgpool-II successfully started. version 3.1.0-alpha1 (umiyameboshi)
2011-02-01 16:48:05 LOG:   pid 7837: find_primary_node: 1 node is standby
2011-02-01 16:48:05 LOG:   pid 7837: find_primary_node: no primary node found

Shouldn't pgpool2 promote the standby to primary as soon as it does not detect a primary node in the cluster?
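
With PostgreSQL 9.0 streaming replication, pgpool-II does not run the promotion itself; it only invokes failover_command, which is expected to create the trigger_file that the standby's recovery.conf is watching. A minimal sketch matching the failover_command arguments used elsewhere in this digest (%h = failed host, %H = new master host; paths and user are examples):

#!/bin/sh
# failover.sh FAILED_HOST NEW_MASTER_HOST TRIGGER_FILE
# Create the trigger file on the surviving node so that its
# recovery.conf (trigger_file = '/tmp/trigger_file0') promotes it.
failed_host="$1"
new_master_host="$2"
trigger_file="$3"

ssh postgres@"$new_master_host" "touch $trigger_file"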

I was thinking of running 2 pgpool instances at once and just using the Virtual-IP; however, I'm not sure if this will work since these instances do not know about each other.

Thanks

Wouter



Tatsuo Ishii | 2 Feb 00:35 2011

Re: pgpool-ha and pg9 Streaming replication issue

Maybe pgpool-ha is not yet ready for streaming replication.
Haruka?
--
Tatsuo Ishii
SRA OSS, Inc. Japan
English: http://www.sraoss.co.jp/index_en.php
Japanese: http://www.sraoss.co.jp

Sandeep Thakkar | 2 Feb 10:07 2011

Re: Parallel SELECT retrieve only half of the results

We have set up one master and one slave, where the slave was created by taking a base backup. We did not set up the slave as a standby, which means this server can also run write queries. Then we followed the tutorial to set up Parallel Query. We enabled load balancing and parallel query mode, disabled the rest of the modes, and did everything the pgpool tutorial suggests. We changed one thing, though: we modified dist_def_pgbench.sql because we had only 2 nodes. Here is the diff:

<     SELECT CASE WHEN $1 > 0 AND $1 <= 100000 THEN 0
<         WHEN $1 > 100000 AND $1 <= 200000 THEN 1
<         ELSE 2
---
>     SELECT CASE WHEN $1 > 0 AND $1 <= 150000 THEN 0
>         ELSE 1
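
For clarity, after this change the partitioning function in dist_def_pgbench.sql would read roughly as below; the CREATE FUNCTION wrapper is assumed from the tutorial:

CREATE OR REPLACE FUNCTION pgpool_catalog.dist_def_accounts(val anyelement)
RETURNS integer AS $$
    SELECT CASE WHEN $1 > 0 AND $1 <= 150000 THEN 0
        ELSE 1
        END;
$$ LANGUAGE sql;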

Now, in the end, when I execute "SELECT * FROM pgbench_accounts bench_parallel" using all 3 ports (2 dbservers and 1 pgpool), I find that the query through pgpool takes longer than through either of the two dbserver ports. I guess it should in fact take less time, right? Please let me know if I have done anything wrong.

Thanks for your help.

From: Alessandro Candini <candini-8EtKWW2T1Xo@public.gmane.org>
To: pgpool-general <at> pgfoundry.org
Sent: Mon, January 31, 2011 5:04:28 PM
Subject: [Pgpool-general] Parallel SELECT retrieve only half of the results

TAKATSUKA Haruka | 3 Feb 05:03 2011

Re: pgpool-ha and pg9 Streaming replication issue


pgpool-ha has no special features for HS/SR, but it will work.

> > However pgpool does not like to promote the slave to primary in this
> > case.

Does it mean that pgpool does not start on Node1?
Then it is a problem with the heartbeat config or pgpool-ha; check the heartbeat log.

______________________________________________________________________
 harukat@...  SRA OSS, Inc  http://www.sraoss.co.jp

Bharath Keshav | 3 Feb 10:02 2011

stuck with a shmem_exit after the loading of "/usr/local/etc/pool_hba.conf"

I am getting the following error when I try to run pgpool. I am trying to use PostgreSQL 9.0 streaming replication (hot standby) and the load balancing capability of pgpool. However, I am stuck with a shmem_exit after the loading of "/usr/local/etc/pool_hba.conf", as attached below. Please let me know what could be wrong.



2011-02-03 08:53:20 DEBUG: pid 12138: key: listen_addresses
2011-02-03 08:53:20 DEBUG: pid 12138: value: '*' kind: 4
2011-02-03 08:53:20 DEBUG: pid 12138: key: port
2011-02-03 08:53:20 DEBUG: pid 12138: value: 9999 kind: 2
2011-02-03 08:53:20 DEBUG: pid 12138: key: pcp_port
2011-02-03 08:53:20 DEBUG: pid 12138: value: 9898 kind: 2
2011-02-03 08:53:20 DEBUG: pid 12138: key: socket_dir
2011-02-03 08:53:20 DEBUG: pid 12138: value: '/var/run/postgresql' kind: 4
2011-02-03 08:53:20 DEBUG: pid 12138: key: pcp_socket_dir
2011-02-03 08:53:20 DEBUG: pid 12138: value: '/tmp' kind: 4
2011-02-03 08:53:20 DEBUG: pid 12138: key: backend_socket_dir
2011-02-03 08:53:20 DEBUG: pid 12138: value: '/var/run/postgresql' kind: 4
2011-02-03 08:53:20 DEBUG: pid 12138: key: pcp_timeout
2011-02-03 08:53:20 DEBUG: pid 12138: value: 10 kind: 2
2011-02-03 08:53:20 DEBUG: pid 12138: key: num_init_children
2011-02-03 08:53:20 DEBUG: pid 12138: value: 32 kind: 2
2011-02-03 08:53:20 DEBUG: pid 12138: key: max_pool
2011-02-03 08:53:20 DEBUG: pid 12138: value: 4 kind: 2
2011-02-03 08:53:20 DEBUG: pid 12138: key: child_life_time
2011-02-03 08:53:20 DEBUG: pid 12138: value: 0 kind: 2
2011-02-03 08:53:20 DEBUG: pid 12138: key: connection_life_time
2011-02-03 08:53:20 DEBUG: pid 12138: value: 0 kind: 2
2011-02-03 08:53:20 DEBUG: pid 12138: key: child_max_connections
2011-02-03 08:53:20 DEBUG: pid 12138: value: 0 kind: 2
2011-02-03 08:53:20 DEBUG: pid 12138: key: client_idle_limit
2011-02-03 08:53:20 DEBUG: pid 12138: value: 0 kind: 2
2011-02-03 08:53:20 DEBUG: pid 12138: key: authentication_timeout
2011-02-03 08:53:20 DEBUG: pid 12138: value: 60 kind: 2
2011-02-03 08:53:20 DEBUG: pid 12138: key: logdir
2011-02-03 08:53:20 DEBUG: pid 12138: value: '/var/log/pgpool' kind: 4
2011-02-03 08:53:20 DEBUG: pid 12138: key: pid_file_name
2011-02-03 08:53:20 DEBUG: pid 12138: value: '/var/run/pgpool/pgpool.pid' kind: 4
2011-02-03 08:53:20 DEBUG: pid 12138: key: replication_mode
2011-02-03 08:53:20 DEBUG: pid 12138: value: true kind: 1
2011-02-03 08:53:20 DEBUG: pid 12138: key: load_balance_mode
2011-02-03 08:53:20 DEBUG: pid 12138: value: true kind: 1
2011-02-03 08:53:20 DEBUG: pid 12138: key: replication_stop_on_mismatch
2011-02-03 08:53:20 DEBUG: pid 12138: value: false kind: 1
2011-02-03 08:53:20 DEBUG: pid 12138: replication_stop_on_mismatch: 0
2011-02-03 08:53:20 DEBUG: pid 12138: key: failover_if_affected_tuples_mismatch
2011-02-03 08:53:20 DEBUG: pid 12138: value: true kind: 1
2011-02-03 08:53:20 DEBUG: pid 12138: failover_if_affected_tuples_mismatch: 1
2011-02-03 08:53:20 DEBUG: pid 12138: key: replicate_select
2011-02-03 08:53:20 DEBUG: pid 12138: value: false kind: 1
2011-02-03 08:53:20 DEBUG: pid 12138: replicate_select: 0
2011-02-03 08:53:20 DEBUG: pid 12138: key: reset_query_list
2011-02-03 08:53:20 DEBUG: pid 12138: value: 'ABORT;DISCARD ALL' kind: 4
2011-02-03 08:53:20 DEBUG: pid 12138: extract_string_tokens: token: ABORT
2011-02-03 08:53:20 DEBUG: pid 12138: extract_string_tokens: token: DISCARD ALL
2011-02-03 08:53:20 DEBUG: pid 12138: key: white_function_list
2011-02-03 08:53:20 DEBUG: pid 12138: value: '' kind: 4
2011-02-03 08:53:20 DEBUG: pid 12138: key: black_function_list
2011-02-03 08:53:20 DEBUG: pid 12138: value: 'nextval,setval,foo' kind: 4
2011-02-03 08:53:20 DEBUG: pid 12138: extract_string_tokens: token: nextval
2011-02-03 08:53:20 DEBUG: pid 12138: extract_string_tokens: token: setval
2011-02-03 08:53:20 DEBUG: pid 12138: extract_string_tokens: token: foo
2011-02-03 08:53:20 DEBUG: pid 12138: key: print_timestamp
2011-02-03 08:53:20 DEBUG: pid 12138: value: true kind: 1
2011-02-03 08:53:20 DEBUG: pid 12138: key: master_slave_mode
2011-02-03 08:53:20 DEBUG: pid 12138: value: false kind: 1
2011-02-03 08:53:20 DEBUG: pid 12138: key: master_slave_sub_mode
2011-02-03 08:53:20 DEBUG: pid 12138: value: 'stream' kind: 4
2011-02-03 08:53:20 DEBUG: pid 12138: key: delay_threshold
2011-02-03 08:53:20 DEBUG: pid 12138: value: 100 kind: 2
2011-02-03 08:53:20 DEBUG: pid 12138: key: log_standby_delay
2011-02-03 08:53:20 DEBUG: pid 12138: value: 'if_over_threshold' kind: 4
2011-02-03 08:53:20 DEBUG: pid 12138: key: connection_cache
2011-02-03 08:53:20 DEBUG: pid 12138: value: true kind: 1
2011-02-03 08:53:20 DEBUG: pid 12138: key: health_check_timeout
2011-02-03 08:53:20 DEBUG: pid 12138: value: 10 kind: 2
2011-02-03 08:53:20 DEBUG: pid 12138: key: health_check_period
2011-02-03 08:53:20 DEBUG: pid 12138: value: 10 kind: 2
2011-02-03 08:53:20 DEBUG: pid 12138: key: health_check_user
2011-02-03 08:53:20 DEBUG: pid 12138: value: 'www-data' kind: 4
2011-02-03 08:53:20 DEBUG: pid 12138: key: failover_command
2011-02-03 08:53:20 DEBUG: pid 12138: value: '/usr/local/etc/failover.sh %d "%h" %p %D %m %M "%H" %P' kind: 4
2011-02-03 08:53:20 DEBUG: pid 12138: key: failback_command
2011-02-03 08:53:20 DEBUG: pid 12138: value: '/bin/rm -f /tmp/trigger_file1' kind: 4
2011-02-03 08:53:20 DEBUG: pid 12138: key: fail_over_on_backend_error
2011-02-03 08:53:20 DEBUG: pid 12138: value: false kind: 1
2011-02-03 08:53:20 DEBUG: pid 12138: key: insert_lock
2011-02-03 08:53:20 DEBUG: pid 12138: value: true kind: 1
2011-02-03 08:53:20 DEBUG: pid 12138: key: ignore_leading_white_space
2011-02-03 08:53:20 DEBUG: pid 12138: value: false kind: 1
2011-02-03 08:53:20 DEBUG: pid 12138: key: log_statement
2011-02-03 08:53:20 DEBUG: pid 12138: value: true kind: 1
2011-02-03 08:53:20 DEBUG: pid 12138: key: log_per_node_statement
2011-02-03 08:53:20 DEBUG: pid 12138: value: true kind: 1
2011-02-03 08:53:20 DEBUG: pid 12138: key: log_connections
2011-02-03 08:53:20 DEBUG: pid 12138: value: false kind: 1
2011-02-03 08:53:20 DEBUG: pid 12138: key: log_hostname
2011-02-03 08:53:20 DEBUG: pid 12138: value: false kind: 1
2011-02-03 08:53:20 DEBUG: pid 12138: key: parallel_mode
2011-02-03 08:53:20 DEBUG: pid 12138: value: false kind: 1
2011-02-03 08:53:20 DEBUG: pid 12138: key: enable_query_cache
2011-02-03 08:53:20 DEBUG: pid 12138: value: false kind: 1
2011-02-03 08:53:20 DEBUG: pid 12138: key: pgpool2_hostname
2011-02-03 08:53:20 DEBUG: pid 12138: value: '67.23.26.182' kind: 4
2011-02-03 08:53:20 DEBUG: pid 12138: key: system_db_hostname
2011-02-03 08:53:20 DEBUG: pid 12138: value: '67.23.26.182' kind: 4
2011-02-03 08:53:20 DEBUG: pid 12138: key: system_db_port
2011-02-03 08:53:20 DEBUG: pid 12138: value: 5432 kind: 2
2011-02-03 08:53:20 DEBUG: pid 12138: key: system_db_dbname
2011-02-03 08:53:20 DEBUG: pid 12138: value: 'pgpool' kind: 4
2011-02-03 08:53:20 DEBUG: pid 12138: key: system_db_schema
2011-02-03 08:53:20 DEBUG: pid 12138: value: 'pgpool_catalog' kind: 4
2011-02-03 08:53:20 DEBUG: pid 12138: key: system_db_user
2011-02-03 08:53:20 DEBUG: pid 12138: value: 'pgpool' kind: 4
2011-02-03 08:53:20 DEBUG: pid 12138: key: system_db_password
2011-02-03 08:53:20 DEBUG: pid 12138: value: '' kind: 4
2011-02-03 08:53:20 DEBUG: pid 12138: key: enable_pool_hba
2011-02-03 08:53:20 DEBUG: pid 12138: value: true kind: 1
2011-02-03 08:53:20 DEBUG: pid 12138: key: pool_passwd
2011-02-03 08:53:20 DEBUG: pid 12138: value: 'pool_passwd' kind: 4
2011-02-03 08:53:20 DEBUG: pid 12138: key: recovery_user
2011-02-03 08:53:20 DEBUG: pid 12138: value: 'postgres' kind: 4
2011-02-03 08:53:20 DEBUG: pid 12138: key: recovery_password
2011-02-03 08:53:20 DEBUG: pid 12138: value: 'pgpoolAdmin' kind: 4
2011-02-03 08:53:20 DEBUG: pid 12138: key: recovery_1st_stage_command
2011-02-03 08:53:20 DEBUG: pid 12138: value: 'basebackup.sh' kind: 4
2011-02-03 08:53:20 DEBUG: pid 12138: key: recovery_2nd_stage_command
2011-02-03 08:53:20 DEBUG: pid 12138: value: '' kind: 4
2011-02-03 08:53:20 DEBUG: pid 12138: key: recovery_timeout
2011-02-03 08:53:20 DEBUG: pid 12138: value: 60 kind: 2
2011-02-03 08:53:20 DEBUG: pid 12138: key: client_idle_limit_in_recovery
2011-02-03 08:53:20 DEBUG: pid 12138: value: 0 kind: 2
2011-02-03 08:53:20 DEBUG: pid 12138: key: lobj_lock_table
2011-02-03 08:53:20 DEBUG: pid 12138: value: 'pgpool_lobj_lock' kind: 4
2011-02-03 08:53:20 DEBUG: pid 12138: key: ssl
2011-02-03 08:53:20 DEBUG: pid 12138: value: true kind: 1
2011-02-03 08:53:20 DEBUG: pid 12138: key: ssl_key
2011-02-03 08:53:20 DEBUG: pid 12138: value: '/usr/local/etc/server.key' kind: 4
2011-02-03 08:53:20 DEBUG: pid 12138: key: ssl_cert
2011-02-03 08:53:20 DEBUG: pid 12138: value: '/usr/local/etc/server.crt' kind: 4
2011-02-03 08:53:20 DEBUG: pid 12138: key: debug_level
2011-02-03 08:53:20 DEBUG: pid 12138: value: 0 kind: 2
2011-02-03 08:53:20 DEBUG: pid 12138: loading "/usr/local/etc/pool_hba.conf" for client authentication configuration file
2011-02-03 08:53:20 DEBUG: pid 12138: shmem_exit(0)


thanks, 
Bharath
Sandeep Thakkar | 3 Feb 10:10 2011

Re: stuck with a shmem_exit after the loading of "/usr/local/etc/pool_hba.conf"


That is the correct output; I think pgpool must be running fine. Did you check your processes after this? What options do you give when starting pgpool? Use "pgpool -d -D -f pgpool.conf".

From: Bharath Keshav <bharath.keshav-Re5JQEeQqe8AvxtiuMwx3w@public.gmane.org>
To: pgpool-general-JL6EbXIHTPOxbKUeIHjxjQ@public.gmane.org
Sent: Thu, February 3, 2011 2:32:21 PM
Subject: [Pgpool-general] stuck with a shmem_exit after the loading of "/usr/local/etc/pool_hba.conf"

Bharath Keshav | 3 Feb 10:51 2011

Re: stuck with a shmem_exit after the loading of "/usr/local/etc/pool_hba.conf"

You are right, it's running. However, I am not able to connect using the command "psql -p 9999 -U postgres". I get the following error:


psql: server closed the connection unexpectedly
This probably means the server terminated abnormally
before or while processing the request.


Also, through the web interface, which is working, I don't get anything when I click the "node info" button, which means it's empty.
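
Independent of the web interface, the pcp tools can show what pgpool thinks of each backend; pcp_node_info talks to the pcp_port (9898 in the config below). The arguments are timeout, host, pcp port, pcp user, pcp password and node id; user and password are placeholders here for whatever is in pcp.conf:

pcp_node_info 10 localhost 9898 <pcp_user> <pcp_password> 0

It prints the node's hostname, port, status and weight; repeat with node ids 1 and 2 for the other backends.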

The following is my configuration:


#
# pgpool-II configuration file sample
# $Header: /cvsroot/pgpool/pgpool-web/contrib_docs/simple_sr_setting/pgpool.conf,v 1.1 2010/11/04 04:39:57 t-ishii Exp $

# Host name or IP address to listen on: '*' for all, '' for no TCP/IP
# connections
listen_addresses = '*'

# Port number for pgpool
port = 9999

# Port number for pgpool communication manager
pcp_port = 9898

# Unix domain socket path.  (The Debian package defaults to
# /var/run/postgresql.)
socket_dir = '/var/run/postgresql'

# Unix domain socket path for pgpool communication manager.
# (Debian package defaults to /var/run/postgresql)
pcp_socket_dir = '/tmp'

# Unix domain socket path for the backend. Debian package defaults to /var/run/postgresql!
backend_socket_dir = '/var/run/postgresql'

# pgpool communication manager timeout. 0 means no timeout, but strongly not recommended!
pcp_timeout = 10

# number of pre-forked child process
num_init_children = 32

# Number of connection pools allowed for a child process
max_pool = 4

# If idle for this many seconds, child exits.  0 means no timeout.
child_life_time = 0

# If idle for this many seconds, connection to PostgreSQL closes.
# 0 means no timeout.
connection_life_time = 0

# If child_max_connections connections were received, child exits.
# 0 means no exit.
child_max_connections = 0

# If client_idle_limit is n (n > 0), the client is forced to be
# disconnected whenever after n seconds idle (even inside an explicit
# transactions!)
# 0 means no disconnect.
client_idle_limit = 0

# Maximum time in seconds to complete client authentication.
# 0 means no timeout.
authentication_timeout = 60

# Logging directory
logdir = '/var/log/pgpool'

# pid file name
pid_file_name = '/var/run/pgpool/pgpool.pid'

# Replication mode
replication_mode = true

# Load balancing mode, i.e., all SELECTs are load balanced.
# This is ignored if replication_mode is false.
load_balance_mode = true

# If there's a disagreement with the packet kind sent from backend,
# then degenerate the node which is most likely "minority".  If false,
# just force to exit this session.
replication_stop_on_mismatch = false

# If there's a disagreement with the number of affected tuples in
# UPDATE/DELETE, then degenerate the node which is most likely
# "minority".
# If false, just abort the transaction to keep the consistency.
failover_if_affected_tuples_mismatch = true

# If true, replicate SELECT statement when load balancing is disabled.
# If false, it is only sent to the master node.
replicate_select = false

# Semicolon separated list of queries to be issued at the end of a session
reset_query_list = 'ABORT;DISCARD ALL'

# white_function_list is a comma separated list of function names
# those do not write to database. Any functions not listed here
# are regarded to write to database and SELECTs including such 
# writer-functions will be executed on master(primary) in master/slave
# mode, or executed on all DB nodes in replication mode.
#
# black_function_list is a comma separated list of function names
# those write to database. Any functions not listed here
# are regarded not to write to database and SELECTs including such 
# read-only-functions will be executed on any DB nodes.
#
# You cannot fill in both white_function_list and
# black_function_list at the same time. If you specify something in
# one of them, you should leave the other empty.
#
# Pre 3.0 pgpool-II recognizes nextval and setval in hard coded
# way. Following setting will do the same as the previous version.
# white_function_list = ''
# black_function_list = 'nextval,setval'
white_function_list = ''
#black_function_list = ''
black_function_list = 'nextval,setval,foo'

# If true print timestamp on each log line.
print_timestamp = true

# If true, operate in master/slave mode.
master_slave_mode = false

# Master/slave sub mode. either 'slony' or 'stream'. Default is 'slony'.
# master_slave_sub_mode = 'stream'
master_slave_sub_mode = 'stream'

# If the standby server delays more than delay_threshold,
# any query goes to the primary only. The unit is in bytes.
# 0 disables the check. Default is 0.
# Note that health_check_period required to be greater than 0
# to enable the functionality.
delay_threshold = 100

# 'always' logs the standby delay whenever health check runs.
# 'if_over_threshold' logs only if the delay exceeds delay_threshold.
# 'none' disables the delay log.
log_standby_delay = 'if_over_threshold'
#log_standby_delay = 'always'

# If true, cache connection pool.
connection_cache = true

# Health check timeout.  0 means no timeout.
health_check_timeout = 10

# Health check period.  0 means no health check.
health_check_period = 10

# Health check user
health_check_user = 'www-data'

# Execute command by failover.
# special values:  %d = node id
#                  %h = host name
#                  %p = port number
#                  %D = database cluster path
#                  %m = new master node id
#                  %M = old master node id
#                  %H = new master node host name
#                  %P = old primary node id
#                  %% = '%' character
#
failover_command = '/usr/local/etc/failover.sh %d "%h" %p %D %m %M "%H" %P'

# Execute command by failback.
# special values:  %d = node id
#                  %h = host name
#                  %p = port number
#                  %D = database cluster path
#                  %m = new master node id
#                  %M = old master node id
#                  %% = '%' character
#
failback_command = '/bin/rm -f /tmp/trigger_file1'

# If true, trigger fail over when writing to the backend communication
# socket fails. This is the same behavior of pgpool-II 2.2.x or
# earlier. If set to false, pgpool will report an error and disconnect
# the session.
fail_over_on_backend_error = false

# If true, automatically lock table with INSERT statements to keep SERIAL
# data consistency.  An /*INSERT LOCK*/ comment has the same effect.  A
# /*NO INSERT LOCK*/ comment disables the effect.
insert_lock = true

# If true, ignore leading white spaces of each query while pgpool judges
# whether the query is a SELECT so that it can be load balanced.  This
# is useful for certain APIs such as DBI/DBD which are known to add an
# extra leading white space.
ignore_leading_white_space = false

# If true, print all statements to the log.  Like the log_statement option
# to PostgreSQL, this allows for observing queries without engaging in full
# debugging.
log_statement = true

# If true, print all statements to the log. Similar to log_statement except
# that prints DB node id and backend process id info.
log_per_node_statement = true

# If true, incoming connections will be printed to the log.
log_connections = false

# If true, hostname will be shown in ps status. Also shown in
# connection log if log_connections = true.
# Be warned that this feature will add overhead to look up hostname.
log_hostname = false

# if non 0, run in parallel query mode
parallel_mode = false

# if non 0, use query cache
enable_query_cache = false

#set pgpool2 hostname 
pgpool2_hostname = 'localhost'

# system DB info
system_db_hostname = 'localhost'
system_db_port = 5432
system_db_dbname = 'pgpool'
system_db_schema = 'pgpool_catalog'
system_db_user = 'pgpool'
system_db_password = ''

# backend_hostname, backend_port, backend_weight
# here are examples


# - HBA -

# If true, use pool_hba.conf for client authentication. In pgpool-II
# 1.1, the default value is false. The default value will be true in
# 1.2.
enable_pool_hba = false

# md5 authentication file name. '' disables md5 authentication.
# To enable md5 auth, set enable_pool_hba to true.
# Default is 'pool_passwd'.
pool_passwd = not_revealed

# - online recovery -
# online recovery user
recovery_user = 'postgres'

# online recovery password
recovery_password = not_revealed

# execute a command in first stage.
recovery_1st_stage_command = 'basebackup.sh'

# execute a command in second stage.
recovery_2nd_stage_command = ''

# maximum time in seconds to wait for remote start-up. 0 means no wait
recovery_timeout = 60

# If client_idle_limit_in_recovery is n (n > 0), the client is forced
# to be disconnected whenever after n seconds idle (even inside an
# explicit transactions!)  0 means no disconnect. This parameter only
# takes effect in recovery 2nd stage.
client_idle_limit_in_recovery = 0

# Specify table name to lock. This is used when rewriting lo_creat
# command in replication mode. The table must exist and has writable
# permission to public. If the table name is '', no rewriting occurs.
lobj_lock_table = 'pgpool_lobj_lock'

# If true, enable SSL support for both frontend and backend connections.
# note that you must also set ssl_key and ssl_cert for SSL to work in
# the frontend connections.
ssl = true
# path to the SSL private key file
ssl_key = '/usr/local/etc/server.key'
# path to the SSL public certificate file
ssl_cert = '/usr/local/etc/server.crt'

# Debug message verbosity level. 0: no message, 1 <= : more verbose
debug_level = 0

replication_timeout = 5000
log_statement = true
ssl_ca_cert = ''
ssl_ca_cert_dir = ''
backend_hostname0 = '67.23.26.182'
backend_port0 = 5432
backend_weight0 = 1
backend_data_directory0 = '/var/lib/postgresql/9.0/main/'
backend_hostname1 = '67.23.29.53'
backend_port1 = 5432
backend_weight1 = 1
backend_data_directory1 = '/var/lib/postgresql/9.0/main/'
backend_hostname2 = '67.23.27.95'
backend_port2 = 5432
backend_weight2 = 1
backend_data_directory2 = '/var/lib/postgresql/9.0/main/'


On Thu, Feb 3, 2011 at 2:40 PM, Sandeep Thakkar <sandeeptt-/E1597aS9LQAvxtiuMwx3w@public.gmane.org> wrote:

That is the correct output. I think, pgpool must be running fine. Did you check your processes after this? What options do you give while starting pgpool? Use "pgpool -d -D -f pgpool.conf".


md5 auth + SSL ??

Hi all,
Have pgpool-II 3.0.1 in replication mode up and running, accessing two instances of pg 9.0.1 on the same machine.
So far so good, all looks fine and I'm almost happy, except I'm unable to achieve authentication the way I want to :-(

What I want:
- all connections from the same machine should be trusted.
- all connections from different hosts should only be possible via SSL

What I did:
-> pgpool.conf (besides all other entries for replication, ports etc.):
   - ssl = true, ssl_key + ssl_cert point to the correct ssl files
   - enable_pool_hba = true
   - pool_passwd built with pg_md5 --md5auth

pool_hba.conf
===========
local      all   postgres                   trust
hostssl    all   all        0.0.0.0/0       md5

pg_hba.conf
=========
# TYPE     DATABASE   USER       CIDR-ADDRESS   METHOD
local      all        postgres                  trust
hostssl    all        postgres   127.0.0.1/0    md5

Result:
- connecting locally: -> md5 authentication is unsupported in replication,
- connecting from different hosts: -> able to connect with AND WITHOUT ??? SSL by supplying the password

Then I changed pg_hba.conf to the following:

pg_hba.conf
=========
# TYPE     DATABASE   USER       CIDR-ADDRESS   METHOD
local      all        postgres                  trust
host       all        postgres   127.0.0.1/0    trust
hostssl    all        postgres   127.0.0.1/0    md5

Result:
- connecting locally: -> ok, connect without pw (trust) possible
- connecting from different hosts: -> able to connect with AND WITHOUT SSL ??? and with and WITHOUT supplying a password ??? :-(

Questions:
- Is it possible to configure what I want?
- How?
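
A note on the hba rules above: a plain "host" line matches both SSL and non-SSL connections, and a CIDR mask of /0 (as in 127.0.0.1/0) matches every client address, which is why connections succeed without SSL and, with the "host ... trust" line, without a password. On the PostgreSQL side, the usual way to express "local trust, remote only via SSL" is to reject non-SSL remote connections explicitly; a sketch to adapt:

# TYPE       DATABASE   USER   CIDR-ADDRESS   METHOD
local        all        all                   trust
hostssl      all        all    0.0.0.0/0      md5
hostnossl    all        all    0.0.0.0/0      reject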

Any help is highly appreciated.
TIA

acki4711

