Jörn Huxhorn | 23 Apr 23:06 2015

javadoc warnings for protoc-generated files

Java 8's javadoc doclint generates a huge number of warnings for files generated by protoc.

I’m pretty sure I created an issue for this in the issue tracker hosted on the old Google Code site, but it seems
those issues have all vanished.

Has this been addressed in the meantime? Should I create a new issue on GitHub?

Cheers,
Jörn.
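For reference, pending an upstream fix, a common stopgap is to disable doclint for the generated sources. A minimal Java sketch using the standard javax.tools.ToolProvider API; the output directory and source path below are hypothetical:

    import javax.tools.DocumentationTool;
    import javax.tools.ToolProvider;

    public class JavadocNoLint {
        public static void main(String[] args) {
            DocumentationTool javadoc = ToolProvider.getSystemDocumentationTool();
            // -Xdoclint:none disables the new Java 8 doclint checks entirely;
            // -Xdoclint:-missing would keep the other checks and only drop
            // the "missing comment" warnings that generated code often triggers.
            int rc = javadoc.run(null, null, null,
                    "-Xdoclint:none", "-d", "docs",
                    "src/generated/SomeProtoMessage.java");
            System.exit(rc);
        }
    }

The same -Xdoclint flag can be passed to the javadoc task of whatever build tool is in use.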

Jeff Zhang | 23 Apr 14:47 2015

No protobuf 2.5 download?

I could not find the download for 2.5. Can anyone point me to it? Thanks!



William Albenzi | 22 Apr 19:50 2015

Where is the source code for pre-2.4 releases?

I am attempting to get an older release of ProtoBuf. When I visit https://code.google.com/p/protobuf/downloads/list, as indicated on https://developers.google.com/protocol-buffers/docs/downloads, I am redirected to GitHub. The oldest version I see tagged there is 2.4.1.

Where can I get the source code for these older releases?

Thank you for any suggestions,
Will Albenzi

Jan Tattermusch | 22 Apr 18:21 2015

C# protobuf support coming soon

Hi all,

There is a new group for C# protocol buffers: "protobuf-csharp".

I am reposting its very first message here:

The Google protocol buffers team is planning to officially support C#. As a first step, we have migrated the protobuf-csharp-port project (owned by Jon Skeet) and rewritten the entire codegen to become an integral part of protoc (the protocol buffer compiler). The effort is still in an early phase and everything is experimental, but all the code is already available in the csharp branch of the google/protobuf GitHub repository.

The csharp branch:

As of now, C# protobufs support the majority of proto2 features, but we are working towards providing full proto3 support for C#.

Jan

Wenwei Peng | 21 Apr 05:36 2015

The core of the core of the big data solutions -- Map

Title:       The core of the core of the big data solutions -- Map
Author:      pengwenwei
Email:       wenwei19710430
Language:    c++
Platform:    Windows, linux
Technology:  Perfect hash algorithm
Level:       Advanced
Description: Map algorithm with high performance
Section:     MFC c++ map stl
SubSection:  c++ algorithm
License:     (GPLv3)

    Download demo project - 1070 Kb
    Download source - 1070 Kb

Introduction:
For C++ programs, map is used everywhere, and the bottleneck of program performance is often the performance of the map, especially in the case of large data where the business logic is tightly coupled and the data cannot be distributed and processed in parallel. So the performance of the map becomes the key technology.

In my work experience in the telecommunications industry and the information security industry, I dealt with big low-level data, especially the most complex information security data, and none of it can do without a map.

For example: IP tables, MAC tables, telephone number lists, domain name resolution tables, ID number lookups, the cloud scanning of Trojan and virus signatures, etc.

The map of the STL library uses binary search, so it has the worst performance. Google's hash map currently has the best performance and memory usage, but it has a collision probability. Big data systems rarely use a map with a collision probability, especially where fees are involved and errors are unacceptable.

Now I am publishing my algorithms here. There are three kinds of map; after the build step it becomes a hash map. You can test the comparison: my algorithm has zero collision probability, yet its performance is also better than the hash algorithm, and even its ordinary performance differs little from Google's.

My algorithm is a perfect hash algorithm. Its key index and the principle of its compression algorithm are out of the ordinary, and most importantly the structure is completely different, so the key index compression is fundamentally different. The most direct benefit for a program: where the original map needed ten servers, I now need only one.
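To make the concept concrete, here is a minimal, hedged Java sketch of the perfect-hashing idea itself (this is not the pwwHashMap code, which is only available from the download below): when the key set is known in advance, one can search for a seed that makes a simple hash collision-free over exactly that set, giving guaranteed single-probe lookups.

    // Minimal perfect-hash sketch: brute-force a seed so that hash(seed, key)
    // places a fixed key set into distinct slots. Illustrative only; real
    // perfect-hash constructions (CHD, BDZ, ...) scale far better.
    public class TinyPerfectHash {
        private final String[] slots;
        private final int seed;

        public TinyPerfectHash(String[] keys) {
            int n = Math.max(2, keys.length * 2);  // load factor 0.5 keeps the search short
            int s = 0;
            String[] table;
            while ((table = tryBuild(keys, n, s)) == null) {
                if (++s == 100000) { s = 0; n *= 2; }  // grow the table if the search stalls
            }
            this.slots = table;
            this.seed = s;
        }

        // Returns a collision-free table for this seed, or null on any collision.
        private static String[] tryBuild(String[] keys, int n, int seed) {
            String[] table = new String[n];
            for (String k : keys) {
                int slot = Math.floorMod(mix(seed, k), n);
                if (table[slot] != null) return null;
                table[slot] = k;
            }
            return table;
        }

        // FNV-1a-style hash whose multiplier depends on the seed (kept odd).
        private static int mix(int seed, String key) {
            int h = 0x811C9DC5;
            for (int i = 0; i < key.length(); i++) {
                h = (h ^ key.charAt(i)) * (16777619 + 2 * seed);
            }
            return h;
        }

        public boolean contains(String key) {
            int slot = Math.floorMod(mix(seed, key), slots.length);
            return key.equals(slots[slot]);  // exactly one probe, never more
        }

        public static void main(String[] args) {
            TinyPerfectHash t = new TinyPerfectHash(
                    new String[] {"10.0.0.1", "10.0.0.2", "192.168.1.1"});
            System.out.println(t.contains("10.0.0.2"));  // true
            System.out.println(t.contains("8.8.8.8"));   // false
        }
    }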
Note: the pwwHashMap code cannot be used for commercial purposes; for commercial applications, you can contact me via QQ 75293192.
Download:
https://sourceforge.net/projects/pwwhashmap/files

Applications:
First, modern warfare cannot do without massive information queries: if a query for enemy target information slows down by one second, it could delay a fighter and lead to the failure of the entire war. Information retrieval is inseparable from the map; if military products use pwwHashMap instead of the traditional map, you will be the winner.

Second, the performance of a router determines surfing speed; just replace the map in open-source router code with pwwHashMap and its speed can increase tenfold.
There are many tables to query and update in the router's DHCP protocol, such as IP and MAC tables, and all of these are handled by maps. But until now all of them have used the STL library, whose performance is very low, while using a hash map instead has an error probability, so the load can only be dispersed across multiple routers. With pwwHashMap, you can save at least ten sets of equipment.

Third, Hadoop is recognized as the current big data solution, and its most fundamental element is very heavy use of maps instead of SQL and tables. Hadoop assumes the amounts of data are so huge that the data cannot be moved at all, so the analysis must be carried out locally. But if the map in the open-source Hadoop code is changed to pwwHashMap, the performance will increase a hundredfold without any problems.


Background to this article that may be useful, such as an introduction to the basic ideas presented:
http://blog.csdn.net/chixinmuzi/article/details/1727195


张裕 | 21 Apr 12:10 2015

Is there any on-the-fly decoding solution for Python?

Hi,

I am trying to decode a customized file type used internally. The file contains one metadata block and a number of frames; each frame is encoded as a protobuf,
while the metadata block is the definition of the message type.

Now I use protoc to generate .proto and .py files from the metadata, and use those .py files to decode the content of the frames. But performance is a problem; is there any on-the-fly decoding solution for this?

Thanks!
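For reference: the protobuf runtimes can build message types from descriptors at run time, without any generated code. In Python this is the google.protobuf.message_factory module; in the Java runtime it is DynamicMessage. A hedged Java sketch of the idea, assuming the metadata is available as a serialized FileDescriptorSet (the format "protoc --descriptor_set_out" emits); the file name and message name are hypothetical:

    import com.google.protobuf.DescriptorProtos.FileDescriptorSet;
    import com.google.protobuf.Descriptors;
    import com.google.protobuf.DynamicMessage;
    import java.io.FileInputStream;

    public class FrameDecoder {
        public static void main(String[] args) throws Exception {
            // The metadata block, serialized as a FileDescriptorSet. Path is hypothetical.
            FileDescriptorSet set;
            try (FileInputStream in = new FileInputStream("meta.desc")) {
                set = FileDescriptorSet.parseFrom(in);
            }
            // Build a runtime descriptor; assumes the .proto file has no imports.
            Descriptors.FileDescriptor file = Descriptors.FileDescriptor.buildFrom(
                    set.getFile(0), new Descriptors.FileDescriptor[0]);
            Descriptors.Descriptor frameType = file.findMessageTypeByName("Frame");

            byte[] frameBytes = readNextFrame();  // frame extraction is format-specific
            DynamicMessage frame = DynamicMessage.parseFrom(frameType, frameBytes);
            System.out.println(frame);
        }

        // Placeholder for pulling the next frame's bytes out of the custom file.
        private static byte[] readNextFrame() {
            return new byte[0];
        }
    }

If generated-code parsing in Python is the bottleneck, the C++-backed Python runtime (PROTOCOL_BUFFERS_PYTHON_IMPLEMENTATION=cpp) is also worth trying.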

Gianluca Borello | 16 Apr 06:56 2015

Lazy or sub-message parsing

Hello,

At my current company we're heavy protobuf users, but we started
facing a big limitation: in the current Java implementation, since
there's no lazy parsing, or ability to do direct sub-message parsing,
we pay a huge penalty of time and, especially, memory (we have an
articulated schema and the memory footprint is often 5X+ the
serialized size) even when we just want to read a small subset of
fields of a big protobuf (~10-100 KB).

I understand that, according to [1] and [2], such a feature has not yet
been released and will come at some point, but nonetheless I'd like
to ping again to see if there's a more defined ETA (of course I
realize this is an open source project, so support and roadmap are
shared on a best-effort basis).

I am aware that there are projects (like flatbuffers) that might be
better suited for our use case, but we would really like to stick with
protobuf because we have a huge code base already working on it, and
we consider it the most stable and robust solution available.

Thanks.

[1] https://groups.google.com/forum/#!searchin/protobuf/lazy|sort:date/protobuf/2P4ge8MfYKw/dcsmXDPaE84J
[2] https://groups.google.com/forum/#!searchin/protobuf/lazy|sort:date/protobuf/G9Jhv_xrxqg/tYcXzyy6zV8J
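For reference, one stopgap that already works with the public Java API is to walk the wire format directly and skip everything except the wanted fields, so the full message tree is never materialized. A hedged sketch; the field number and its type are hypothetical:

    import com.google.protobuf.ByteString;
    import com.google.protobuf.CodedInputStream;
    import com.google.protobuf.WireFormat;
    import java.io.IOException;

    public final class PartialReader {
        // Hypothetical length-delimited field (e.g. a sub-message) with number 5.
        private static final int PAYLOAD_FIELD_NUMBER = 5;

        /** Extracts one field's bytes without parsing anything else. */
        public static ByteString readPayload(byte[] serialized) throws IOException {
            CodedInputStream in = CodedInputStream.newInstance(serialized);
            int tag;
            while ((tag = in.readTag()) != 0) {
                if (WireFormat.getTagFieldNumber(tag) == PAYLOAD_FIELD_NUMBER) {
                    return in.readBytes();  // parse these bytes later, only if needed
                }
                in.skipField(tag);          // no object allocation for unwanted fields
            }
            return ByteString.EMPTY;
        }
    }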

Tristan Soriano | 16 Apr 12:29 2015

Errors in the Java-generated hashCode() method


Hi,
I am new to protobufs and I have some issues with the generated Java code.
I am successfully generating a Java class from the following .proto file:

////////////////////////////////////////////////////////////////////////////////////////////////////////
option java_package = "org.myname.hbase.Coprocessor.autogenerated";
option java_outer_classname = "Sum";
option java_generic_services = true;
option java_generate_equals_and_hash = true;
option optimize_for = SPEED;
message SumRequest {
    required string family = 1;
    required string column = 2;
}
 
message SumResponse {
  required int64 sum = 1 [default = 0];
}
 
service SumService {
  rpc getSum(SumRequest)
    returns (SumResponse);
}

//////////////////////////////////////////////////////////////////////////////////////////////////////

However, the generated code has errors in the generated SumRequest and SumResponse classes:
the auto-generated hashCode() method uses an int "memoizedHashCode" as if it were an instance attribute, but it is never declared.
In SumRequest :

    @java.lang.Override
    public int hashCode() {
      if (memoizedHashCode != 0) {
        return memoizedHashCode;
      }
      int hash = 41;
      hash = (19 * hash) + getDescriptorForType().hashCode();
      if (hasFamily()) {
        hash = (37 * hash) + FAMILY_FIELD_NUMBER;
        hash = (53 * hash) + getFamily().hashCode();
      }
      if (hasColumn()) {
        hash = (37 * hash) + COLUMN_FIELD_NUMBER;
        hash = (53 * hash) + getColumn().hashCode();
      }
      hash = (29 * hash) + getUnknownFields().hashCode();
      memoizedHashCode = hash;
      return hash;
    }

//////////////////////////////////////////////////////////////////////////////////////////////////////
In SumResponse I have an extra error: "The method hashLong(long) is undefined for the type Internal".

    @java.lang.Override
    public int hashCode() {
      if (memoizedHashCode != 0) {
        return memoizedHashCode;
      }
      int hash = 41;
      hash = (19 * hash) + getDescriptorForType().hashCode();
      if (hasSum()) {
        hash = (37 * hash) + SUM_FIELD_NUMBER;
        hash = (53 * hash) + com.google.protobuf.Internal.hashLong(
            getSum());
      }
      hash = (29 * hash) + getUnknownFields().hashCode();
      memoizedHashCode = hash;
      return hash;
    }


As we are not supposed to edit the generated code, do you have any idea what the solution could be?
Should I add the missing attribute on my own, and at what level and visibility? How do I solve my second issue?
Is there any risk in adding getSum() directly?

Thx
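For what it's worth, both symptoms usually point to a protoc/runtime version mismatch rather than a codegen bug: the memoizedHashCode field and Internal.hashLong() exist only in newer protobuf-java runtimes, so code emitted by a newer protoc will not compile against an older protobuf-java JAR. The usual fix is to make the protobuf-java dependency version match the protoc version, not to edit the generated code. For reference only, the missing helper is just the standard long-to-int hash:

    public final class HashCompat {
        private HashCompat() {}

        // Equivalent of com.google.protobuf.Internal.hashLong(long), which newer
        // protobuf-java runtimes provide; same mixing as java.lang.Long.hashCode.
        static int hashLong(long n) {
            return (int) (n ^ (n >>> 32));
        }
    }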

vinod pangul | 15 Apr 10:22 2015

Need help in flatbuffers environment setup on Windows System.

Hi all,
I am working for the first time with FlatBuffers for my project in an embedded system. I have the Eclipse IDE with some drivers and APIs implemented.
I want to use FlatBuffers for wireless communication between two controllers. Can you please help me set up the FlatBuffers environment with the Eclipse IDE on a Windows system?
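For reference: FlatBuffers setup is largely independent of the IDE. You run the flatc compiler over a schema to generate Java sources, add those plus the FlatBuffers Java runtime to the Eclipse project's build path, and build as usual. A hedged sketch of minimal usage, assuming a schema like "table Message { payload:string; }" compiled with flatc --java; the Message class below stands in for that hypothetical generated code:

    import com.google.flatbuffers.FlatBufferBuilder;
    import java.nio.ByteBuffer;

    public class FlatBuffersDemo {
        public static void main(String[] args) {
            // Serialize: builders write into a growable buffer, back to front.
            FlatBufferBuilder builder = new FlatBufferBuilder(1024);
            int payload = builder.createString("hello");
            int msg = Message.createMessage(builder, payload);  // generated helper
            builder.finish(msg);
            byte[] wire = builder.sizedByteArray();  // bytes for the wireless link

            // Deserialize: zero-copy access straight out of the received buffer.
            Message decoded = Message.getRootAsMessage(ByteBuffer.wrap(wire));
            System.out.println(decoded.payload());
        }
    }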
