

Lu, Boying | 30 Nov 11:13 2015

Questions about StorageServiceMBean.forceRepairRangeAsync()

Hi, All,


We plan to upgrade Cassandra from 2.0.17 to the latest release 2.2.3 in our product.


We use:

    /**
     * Same as forceRepairAsync, but handles a specified range
     */
    public int forceRepairRangeAsync(String beginToken, String endToken, final String keyspaceName, boolean isSequential, boolean isLocal, final String... columnFamilies);

(defined in StorageServiceMBean.java) to trigger a repair in Cassandra 2.0.17.


But this interface is marked as @Deprecated in 2.2.3 and has the following prototype:

    @Deprecated
    public int forceRepairRangeAsync(String beginToken, String endToken, String keyspaceName, boolean isSequential, boolean isLocal, boolean repairedAt, String... columnFamilies);


So my questions are:

1. If we continue to use this interface, should we set the 'repairedAt' parameter to true or false?

2. If we don't use this interface, which alternative API should we use? (A rough sketch of one possibility follows.)
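
One possible direction for question 2, assuming the 2.2 MBean also exposes the options-map entry point repairAsync(String keyspace, Map<String, String> options) that the per-flag forceRepair* overloads were deprecated in favour of: the sketch below shows how a range repair might be triggered over JMX. The host, port, keyspace, tokens and option keys ("ranges", "parallelism", "incremental", "columnFamilies") are illustrative assumptions and should be checked against StorageServiceMBean and RepairOption in the 2.2.3 sources.

    import java.util.HashMap;
    import java.util.Map;
    import javax.management.JMX;
    import javax.management.MBeanServerConnection;
    import javax.management.ObjectName;
    import javax.management.remote.JMXConnector;
    import javax.management.remote.JMXConnectorFactory;
    import javax.management.remote.JMXServiceURL;

    import org.apache.cassandra.service.StorageServiceMBean;

    // Sketch only: a possible migration path away from the deprecated
    // forceRepairRangeAsync, based on the options-map repairAsync API.
    public class RangeRepairViaJmx {
        public static void main(String[] args) throws Exception {
            JMXServiceURL url = new JMXServiceURL(
                    "service:jmx:rmi:///jndi/rmi://cassandra-host:7199/jmxrmi"); // example host/port
            try (JMXConnector connector = JMXConnectorFactory.connect(url)) {
                MBeanServerConnection mbs = connector.getMBeanServerConnection();
                StorageServiceMBean ss = JMX.newMBeanProxy(
                        mbs,
                        new ObjectName("org.apache.cassandra.db:type=StorageService"),
                        StorageServiceMBean.class);

                Map<String, String> options = new HashMap<>();
                options.put("ranges", "1234567890:2345678901"); // beginToken:endToken (example tokens)
                options.put("parallelism", "parallel");          // or "sequential"
                options.put("incremental", "false");             // full repair
                options.put("columnFamilies", "my_table");       // example table

                int commandId = ss.repairAsync("my_keyspace", options); // example keyspace
                System.out.println("repair command id: " + commandId);
            }
        }
    }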






Anuj Wadehra | 29 Nov 18:10 2015

Re: Repair Hangs while requesting Merkle Trees

Yes, I think you are correct; the problem might have been resolved by the Cassandra restart rather than by increasing the request timeout.

We are NOT on EC2. We have 2 interfaces on each node: one private and one public.
We have a strange configuration and we need to correct it as per the recommendation at

AS-IS config:
We use broadcast_address = listen_address = PUBLIC IP address.
In seeds, we put the PUBLIC IPs of the other nodes but the private IP for the local node. There were
some issues when we tried to access the local node via its public IP.


On Tue, 24/11/15, Paulo Motta <pauloricardomg@gmail.com> wrote:

 Subject: Re: Repair Hangs while requesting Merkle Trees
 To: "user@cassandra.apache.org" <user@cassandra.apache.org>, "Anuj Wadehra" <anujw_2003@yahoo.co.in>
 Date: Tuesday, 24 November, 2015, 12:38 AM

 The issue might be related to the ESTABLISHED connections existing on just one end. I don't
 think it is related to the inter_dc_tcp_nodelay or request_timeout_in_ms options. Did you
 restart the process when you changed the request_timeout_in_ms option? That might be why
 the problem got fixed, and not the option itself.

 This seems like a network issue or a misconfiguration of this specific node. Are you using
 EC2? Is listen_address == broadcast_address? Are all nodes using the same configuration?
 What Java version are you using?

 You may want to enable TRACE on
 OutgoingTcpConnection and IncomingTcpConnection and compare
 the outputs of healthy nodes with the faulty node.

 2015-11-23 10:04 GMT-08:00 Anuj Wadehra <anujw_2003@yahoo.co.in>:

 Any comments on the ESTABLISHED connections at one end?


 Moreover, inter_dc_tcp_nodelay is false. Can this be the reason that the latency between
 the two DCs is higher and repair messages are getting dropped?


 Can increasing request_timeout_in_ms deal with the latency?


 I see some hinted handoffs being triggered for cross-DC nodes, and hint replays being
 timed out. Is that an indication of a network issue?


 I am getting in touch with the network team to capture netstat output and tcpdumps too.







 On Wed, 18/11/15, Anuj Wadehra <anujw_2003@yahoo.co.in> wrote:

  Subject: Re: Repair Hangs while requesting Merkle Trees
  To: "user@cassandra.apache.org" <user@cassandra.apache.org>
  Date: Wednesday, 18 November, 2015, 7:57 AM


  Thanks Bryan !!

  The connection is in ESTABLISHED state on one end and completely missing on the
  other end (in another DC).

  We can revisit TCP tuning, but the problem is node-specific, so I'm not sure
  whether tuning is the culprit.




  from Yahoo Mail on Android  From:"Bryan

  Cheng" <bryan <at> blockcypher.com>

  Date:Wed, 18 Nov, 2015 at

   2:04 am

  Subject:Re: Repair Hangs

  while requesting Merkle Trees


   Ah OK, might have misunderstood you. The streaming socket should not be in play
   during merkle tree generation (validation compaction). It may come into play
   during the merkle tree exchange - that I'm not sure about. You can read a bit
   more here: https://issues.apache.org/jira/browse/CASSANDRA-8611.

   Regardless, you should have it set - 1 hr is usually a good conservative
   estimate, but you can go much lower safely.

   What state is the connection in that only shows on one side? Is it ESTABLISHED,
   or something else?



   This is a good place to start for tuning, though it doesn't say as much about
   network tuning: https://tobert.github.io/pages/als-cassandra-21-tuning-guide.html.

   More generally, TCP tuning usually revolves around a tradeoff between latency
   and bandwidth. Over long connections (we're talking 10s of ms, instead of the
   sub-1ms you usually see in a good DC network), your expectations should shift
   greatly. Stuff like NODELAY on TCP is very nice for cutting your latencies when
   you're inside a DC, but it generates lots of small packets that will hurt your
   throughput over longer connections due to the need to wait for ACKs.
   otc_coalescing_strategy is in a similar vein, bundling together nearby messages
   to trade latency for throughput.

   You'll also probably want to tune your TCP buffers and window sizes, since that
   determines how much data can be in-flight between acknowledgements, and the
   default size is pitiful for any decent network. Google around for TCP
   tuning/buffer tuning and you should find some good resources.
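
   As a concrete illustration of the tradeoff described above (not part of the
   original thread), here is a minimal, self-contained Java sketch of the socket
   options involved. The 4 MB buffer size is an illustrative assumption, not a
   recommendation, and Cassandra configures its own inter-node sockets from
   cassandra.yaml, so this is only meant to make the latency/throughput tradeoff
   concrete:

   import java.net.Socket;

   // Illustration of TCP_NODELAY and send/receive buffer sizing; not code to add to Cassandra.
   public class TcpTuningSketch {
       public static void main(String[] args) throws Exception {
           try (Socket socket = new Socket()) {
               // TCP_NODELAY disables Nagle's algorithm: lower latency inside a DC, but
               // many small packets, which can hurt throughput on long cross-DC links.
               socket.setTcpNoDelay(true);

               // Send/receive buffers bound how much data can be in flight between ACKs.
               // On a high-latency, high-bandwidth link the bandwidth-delay product is
               // large, so OS defaults (often tens of KB) are far too small.
               socket.setSendBufferSize(4 * 1024 * 1024);    // 4 MB, illustrative
               socket.setReceiveBufferSize(4 * 1024 * 1024); // 4 MB, illustrative

               System.out.println("send buffer:    " + socket.getSendBufferSize());
               System.out.println("receive buffer: " + socket.getReceiveBufferSize());
           }
       }
   }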

  On Mon, Nov 16, 2015 at 5:23 PM, Anuj Wadehra <anujw_2003@yahoo.co.in> wrote:

  Hi Bryan,

  Thanks for the reply !! I didn't mean streaming_socket_timeout_in_ms. I meant
  that when you run netstat (the Linux command) on node A in DC1, you will notice
  that there is a connection in ESTABLISHED state with node B in DC2, but when you
  run netstat on node B, you won't find any connection with node A. Such
  connections only appear across DCs. Is that a problem?

  We haven't set streaming_socket_timeout_in_ms, which I know must be set. I am
  not sure whether setting this property has any effect on merkle tree requests;
  I thought it is only relevant for streaming data if some mismatch is found and
  data needs to be streamed. Please confirm, and what value do you use for the
  streaming socket timeout?

  Moreover, if a socket timeout were the issue, that should happen on other nodes
  too; repair is not run on just one node. Here, the merkle tree request is
  getting lost and not transmitted to one or more nodes in the remote DC.

  I am not sure about the exact distance, but the DCs are connected with a very
  high speed 10 Gbps link.

  When you say different TCP stack tuning, do you have any document describing
  recommendations for multi-DC Cassandra deployments? Can you elaborate on which
  settings need to be different?











  from Yahoo Mail on Android  From:"Bryan

  Cheng" <bryan <at> blockcypher.com>

  Date:Tue, 17 Nov, 2015 at 5:54


  Subject:Re: Repair

   Hangs while requesting Merkle Trees


   Hi Anuj,

   Did you mean streaming_socket_timeout_in_ms? If not, then you definitely want
   that set. Even the best network connections will fail occasionally, and in
   Cassandra < 2.1.10 (I believe) that would leave those connections hanging
   indefinitely on one end.

   How far away are your two DCs from a network perspective, out of curiosity?
   You'll almost certainly be doing some TCP stack tuning for cross-DC, notably
   your buffer and window params, and Cassandra-specific stuff like
   otc_coalescing_strategy and inter_dc_tcp_nodelay.

  On Sat, Nov 14, 2015 at 10:35 AM, Anuj Wadehra <anujw_2003@yahoo.co.in> wrote:

  One more observation: we observed that there are a few TCP connections which a
  node shows as ESTABLISHED, but when we go to the node at the other end, the
  connection is not there. They are called "phantom" connections, I guess. Can
  this be a possible cause?




  from Yahoo Mail on Android  From:"Anuj

  Wadehra" <anujw_2003 <at> yahoo.co.in>

  Date:Sat, 14 Nov, 2015 at 11:59


  Subject:Re: Repair Hangs


   requesting Merkle Trees


   Thanks Daemeon.

   I will capture the output of netstat and share it in the next few days. We were
   thinking of taking tcpdumps also. If it's a network issue and increasing the
   request timeout worked, I am not sure how Cassandra is dropping messages based
   on the timeout: repair messages are non-droppable and not supposed to be timed
   out.

   2 of the 3 nodes in the DC are able to complete repair without any issue. Just
   one node is affected.

   I also observed frequent messages in the logs of other nodes which say that
   hint replay timed out, and the node where hints were being replayed is always a
   remote DC node. Is it related somehow?


  from Yahoo Mail on Android  From:"daemeon

  reiydelle" <daemeonr <at> gmail.com>

  Date:Thu, 12 Nov, 2015 at 10:34 am

  Subject:Re: Repair Hangs while

  requesting Merkle Trees



   Have you checked the network statistics on that machine (netstat -tas) while
   attempting to repair? If netstat shows ANY issues, you have a problem. If you
   can, put the command in a loop running every 60 seconds for maybe 15 minutes
   and post the output.

   Out of curiosity, how many remote DC nodes are getting repaired successfully?





   “Life should not be a journey to the grave with the intention of arriving
   safely in a pretty and well preserved body, but rather to skid in broadside in
   a cloud of smoke, thoroughly used up, totally worn out, and loudly proclaiming
   ‘Wow! What a Ride!’”
   - Hunter Thompson


   Daemeon C.M. Reiydelle
   USA (+1)
   London (+44) (0) 20 8144 9872



  On Wed, Nov 11, 2015 at 1:06 PM, Anuj Wadehra <anujw_2003@yahoo.co.in> wrote:


  We are using 2.0.14. We have 2 DCs at remote locations with 10 Gbps
  connectivity. We are able to complete repair (-par -pr) on 5 nodes. On only one
  node in DC2 are we unable to complete repair, as it always hangs. The node sends
  Merkle Tree requests, but one or more nodes in DC1 (remote) never show that they
  sent the merkle tree reply to the requesting node. Repair hangs infinitely.

  After increasing request_timeout_in_ms on the affected node, we were able to
  successfully run repair on one of the two occasions.

  Any comments on why this is happening on just one node? In
  OutboundTcpConnection.java, the isTimeOut method returns false for a
  non-droppable verb such as the Merkle Tree Request (verb=REPAIR_MESSAGE), so why
  did increasing the request timeout solve the problem on one occasion?

  Anuj Wadehra




  On Thursday, 12 November 2015 2:35 AM, Anuj Wadehra <anujw_2003@yahoo.co.in> wrote:




  We have 2 DCs at remote locations with 10 Gbps connectivity. We are able to
  complete repair (-par -pr) on 5 nodes. On only one node in DC2 are we unable to
  complete repair, as it always hangs. The node sends Merkle Tree requests, but
  one or more nodes in DC1 never show that they sent the merkle tree reply to the
  requesting node. Repair hangs infinitely.

  After increasing request_timeout_in_ms on the affected node, we were able to
  successfully run repair on one of the two occasions.

  Any comments on why this is happening on just one node? In
  OutboundTcpConnection.java, the isTimeOut method always returns false for a
  non-droppable verb such as the Merkle Tree request, so why did increasing the
  request timeout solve the problem on one occasion?

  Anuj Wadehra

Carlos A | 28 Nov 03:45 2015

Issues on upgrading from 2.2.3 to 3.0

Hello all,

I had 2 of my systems upgraded to 3.0 from the same previous version.

The first cluster seems to be fine.

But the second, each node starts and then fails.

On the log I have the following on all of them:

INFO  [main] 2015-11-27 19:40:21,168 ColumnFamilyStore.java:381 - Initializing system_schema.keyspaces
INFO  [main] 2015-11-27 19:40:21,177 ColumnFamilyStore.java:381 - Initializing system_schema.tables
INFO  [main] 2015-11-27 19:40:21,185 ColumnFamilyStore.java:381 - Initializing system_schema.columns
INFO  [main] 2015-11-27 19:40:21,192 ColumnFamilyStore.java:381 - Initializing system_schema.triggers
INFO  [main] 2015-11-27 19:40:21,198 ColumnFamilyStore.java:381 - Initializing system_schema.dropped_columns
INFO  [main] 2015-11-27 19:40:21,203 ColumnFamilyStore.java:381 - Initializing system_schema.views
INFO  [main] 2015-11-27 19:40:21,208 ColumnFamilyStore.java:381 - Initializing system_schema.types
INFO  [main] 2015-11-27 19:40:21,215 ColumnFamilyStore.java:381 - Initializing system_schema.functions
INFO  [main] 2015-11-27 19:40:21,220 ColumnFamilyStore.java:381 - Initializing system_schema.aggregates
INFO  [main] 2015-11-27 19:40:21,225 ColumnFamilyStore.java:381 - Initializing system_schema.indexes
ERROR [main] 2015-11-27 19:40:21,831 CassandraDaemon.java:250 - Cannot start node if snitch's rack differs from previous rack. Please fix the snitch or decommission and rebootstrap this node.

It asks to "Please fix the snitch or decommission and rebootstrap this node"

If none of the nodes can go up, how can I decommission all of them?

Doesn't make sense.

Any suggestions?


Vasiliy I Ozerov | 27 Nov 11:52 2015

Huge ReadStage Pending tasks during startup


We have some strange trouble with Cassandra startup. The cluster consists of 4 nodes: 32 GB RAM per node, 8 CPUs, and each node has about 30 GB of data.

root@vega010:~# nodetool version
ReleaseVersion: 2.2.1

So, before the stop (using disablethrift and drain), nodetool tpstats showed:

Pool Name       Active   Pending   Completed   Blocked   All time blocked
ReadStage            0         0     3093579         0                  0

Just after the start, the logs show:

INFO [main] 2015-11-25 13:22:04 YamlConfigurationLoader.java:92 - Loading settings from file:/etc/cassandra/cassandra.yaml
. . . skipped . . .
INFO [main] 2015-11-25 13:22:21 CommitLog.java:168 - Replaying /var/lib/cassandra/commitlog/CommitLog-5-1448388020045.log, /var/lib/cassandra/commitlog/CommitLog-5-1448388020046.log, /var/lib/cassand
. . . skipped . . .
INFO [main] 2015-11-25 13:23:44 CommitLog.java:170 - Log replay complete, 1047857 replayed mutations
. . . skipped . . .
INFO [CompactionExecutor:4] 2015-11-25 13:23:45 CompactionTask.java:142 - Compacting (cf08d1d0-93ba-11e5-b9f0-7be7ca1986fb) [/var/lib/cassandra/data/system/compaction_history-b4dbb7b4dc493fb5b3bfce6e434832ca/la-3479-big-Data.db:level=0, /var/lib/cassandra/data/system/compaction_history-b4dbb7b4dc493fb5b3bfce6e434832ca/la-3474-big-Data.db:level=0, /var/lib/cassandra/data/system/compaction_history-b4db
. . . skipped . . .
INFO [HANDSHAKE-/] 2015-11-25 13:23:46 OutboundTcpConnection.java:494 - Handshaking version with /
INFO [GossipStage:1] 2015-11-25 13:23:46 Gossiper.java:1003 - Node / has restarted, now UP
WARN [GossipTasks:1] 2015-11-25 13:23:46 FailureDetector.java:243 - Not marking nodes down due to local pause of 101075806441 > 5000000000
INFO [GossipStage:1] 2015-11-25 13:23:46 StorageService.java:1869 - Node / state jump to normal
INFO [HANDSHAKE-/] 2015-11-25 13:23:46 OutboundTcpConnection.java:494 - Handshaking version with /
INFO [GossipStage:1] 2015-11-25 13:23:46 Gossiper.java:1003 - Node / has restarted, now UP
INFO [GossipStage:1] 2015-11-25 13:23:46 StorageService.java:1869 - Node / state jump to normal
INFO [GossipStage:1] 2015-11-25 13:23:46 Gossiper.java:1003 - Node / has restarted, now UP
INFO [HANDSHAKE-/] 2015-11-25 13:23:46 OutboundTcpConnection.java:494 - Handshaking version with /
INFO [HANDSHAKE-/] 2015-11-25 13:23:46 OutboundTcpConnection.java:494 - Handshaking version with /
INFO [GossipStage:1] 2015-11-25 13:23:46 StorageService.java:1869 - Node / state jump to normal
INFO [SharedPool-Worker-20] 2015-11-25 13:23:46 Gossiper.java:970 - InetAddress / is now UP
INFO [main] 2015-11-25 13:23:46 ColumnFamilyStore.java:743 - Completed loading (557 ms; 7022 shards) counter cache for SourcesAggregatedEventsV2.StoryReadingTimeSumPerDay_UTC_P_7
INFO [HANDSHAKE-/] 2015-11-25 13:23:46 OutboundTcpConnection.java:494 - Handshaking version with /
INFO [main] 2015-11-25 13:23:46 AutoSavingCache.java:146 - reading saved cache /var/lib/cassandra/saved_caches/SourcesAggregatedEventsV2-StoryReadingTimeSumPerDay_UTC_N_2-f318e310735f11e5b9599b83dc51d0b0-CounterCache-c.db
INFO [SharedPool-Worker-13] 2015-11-25 13:23:46 Gossiper.java:970 - InetAddress / is now UP
INFO [SharedPool-Worker-3] 2015-11-25 13:23:46 Gossiper.java:970 - InetAddress / is now UP
INFO [SharedPool-Worker-16] 2015-11-25 13:23:46 Gossiper.java:970 - InetAddress / is now UP
INFO [GossipStage:1] 2015-11-25 13:23:46 StorageService.java:1869 - Node / state jump to normal
INFO [SharedPool-Worker-4] 2015-11-25 13:23:46 Gossiper.java:970 - InetAddress / is now UP
INFO [SharedPool-Worker-20] 2015-11-25 13:23:46 Gossiper.java:970 - InetAddress / is now UP
INFO [SharedPool-Worker-1] 2015-11-25 13:23:46 Gossiper.java:970 - InetAddress / is now UP
INFO [SharedPool-Worker-5] 2015-11-25 13:23:46 Gossiper.java:970 - InetAddress / is now UP
INFO [SharedPool-Worker-2] 2015-11-25 13:23:46 Gossiper.java:970 - InetAddress / is now UP
INFO [GossipStage:1] 2015-11-25 13:23:46 StorageService.java:1869 - Node / state jump to normal
INFO [ScheduledTasks:1] 2015-11-25 13:25:55 StatusLogger.java:51 - Pool Name Active Pending Completed Blocked All Time Blocked
INFO [ScheduledTasks:1] 2015-11-25 13:25:55 StatusLogger.java:55 - ReadStage 32 2753202 69509 0 0
INFO [ScheduledTasks:1] 2015-11-25 13:25:55 StatusLogger.java:55 - MutationStage 32 602 9197964 0 0

So, just after startup it has 2,753,202 pending ReadStage tasks, and it takes about 11 hours to complete them all.

So, what could be the reason? 

Vasiliy I Ozerov
Sent with Airmail
Hadmut Danisch | 26 Nov 16:10 2015

Three questions about cassandra


I'm currently reading through heaps of docs and web pages to learn
Cassandra, but there are still three questions I could not find answers
to; maybe someone could help:

1. What happens, if a node is down for some time (hours, days,
   weeks,...) for whatever reason (hardware, power, or network
   failure, maintenance...) and gets back online?

   Does the node remain in its former state and thus become
   inconsistent, with outdated data, or does it pick up the changes
   that occurred during its downtime from the other nodes?

   Can nodes be easily offline for some time, then return and proceed,
   or do they have to be added as a fresh node replacement (of their
   own) to start from scratch?

2. Cassandra allows you to choose from several data consistency levels,
   in particular allowing writes that do not update all nodes
   (e.g. QUORUM, ONE, TWO, THREE).

   What happens with those nodes that did not get an update? Will they
   synchronize with the updated nodes automatically, or will they
   remain in their old state (forever, or until the next explicit write)?

3. What exactly happens when a new node is added to a cluster? Will
   all records now belonging to the new node be automatically shifted
   from the others?

   The web page
   describes a "streaming process", which sounds as if a new node were
   busy collecting its belongings from the others, but it also says to
   perform a

   nodetool cleanup

   on all the old nodes, which would "remove the keys no longer
   belonging to those nodes", which rather sounds like a simple drop,
   i.e. having those records lost. 

   So does cassandra safely fill new nodes, or do they start as empty
   ones and their data is lost?

Thank you!


Badrjan | 26 Nov 10:02 2015

Change the rack of a server

So I have an 8-node cluster and I would like to change the rack of one node. How should I do that?

Luigi Tagliamonte | 26 Nov 09:55 2015

Cassandra Cleanup and disk space

Hi Everyone,
I'd like to understand what cleanup does on a running cluster when there is no cluster topology change. I did a test and saw the cluster's disk space shrink by 200 GB.
I'm using cassandra 2.1.9.
“The only way to get smarter is by playing a smarter opponent.”
Sergey Panov | 26 Nov 00:45 2015

OpsCenter does not work with Cassandra 3.0


Today we tried to set up DataStax OpsCenter to work with Cassandra 3.0.

OpsCenter logs have the following:

2015-11-26 02:17:39+0300 [] INFO: Starting factory <cassandra.io.twistedreactor.TwistedConnectionClientFactory instance at
2015-11-26 02:17:40+0300 [] INFO: Stopping factory <cassandra.io.twistedreactor.TwistedConnectionClientFactory instance at
2015-11-26 02:17:40+0300 [] WARN: [control connection] Error connecting to Unexpected response during Connection setup: ProtocolError('Server protocol version (4) does not match the specified driver protocol version (2). Consider setting Cluster.protocol_version to 4.',)
2015-11-26 02:17:40+0300 [] ERROR: Control connection failed to connect, shutting down Cluster: ('Unable to connect to any servers', {u'': ProtocolError("Unexpected response during Connection setup: ProtocolError('Server protocol version (4) does not match the specified driver protocol version (2). Consider setting Cluster.protocol_version to 4.',)",)})
2015-11-26 02:17:40+0300 [] WARN: ProcessingError while calling CreateClusterConfController: Unable to connect to cluster. Error is: Unable to connect to any seed nodes, tried [u'']

How can we switch protocol_version to 4?
Does OpsCenter officially support the latest version of Cassandra?
Did anybody try to set it up?

Please advise. Thank you!



Sergey Panov

Sergey Panov | 26 Nov 00:36 2015

Unable to use multiple network interfaces in Cassandra 3.0


We are facing an issue where Cassandra 3.0 does not work when we try
to set up private and public network usage.
The following doc was used:

First node settings:
- seeds: ",,"
# private IP
# public IP
internode_compression: none


Fourth node settings:
- seeds: ",,"
# private IP
# public IP
internode_compression: none

All seed nodes detect only themselves; the other nodes show errors:

CassandraDaemon.java:702 - Exception encountered during startup
java.lang.RuntimeException: Unable to gossip with any seeds


Please advise. Thank you!



Sergey Panov

Rich Bowen | 25 Nov 18:32 2015

[ANNOUNCE] CFP open for ApacheCon North America 2016

Community growth starts by talking with those interested in your
project. ApacheCon North America is coming, are you?

We are delighted to announce that the Call For Presentations (CFP) is
now open for ApacheCon North America. You can submit your proposed
sessions at
for big data talks and
for all other topics.

ApacheCon North America will be held in Vancouver, Canada, May 9-13th
2016. ApacheCon has been running every year since 2000, and is the place
to build your project communities.

While we will consider individual talks, we prefer to see related
sessions that are likely to draw users and community members. When
submitting your talk, work with your project community and with related
communities to come up with a full program that will walk attendees
through the basics and on into mastery of your project in example use
cases. Content that introduces what's new in your latest release is also
of particular interest, especially when it builds upon existing well-known
application models. The goal should be to showcase your project in
ways that will attract participants and encourage engagement in your
community. Please remember to involve your whole project community (user
and dev lists) when building content. This is your chance to create a
project specific event within the broader ApacheCon conference.

Content at ApacheCon North America will be cross-promoted as
mini-conferences, such as ApacheCon Big Data, and ApacheCon Mobile, so
be sure to indicate which larger category your proposed sessions fit into.

Finally, please plan to attend ApacheCon, even if you're not proposing a
talk. The biggest value of the event is community building, and we count
on you to make it a place where your project community is likely to
congregate, not just for the technical content in sessions, but for
hackathons, project summits, and good old-fashioned face-to-face networking.


rbowen@apache.org