Igor Novgorodov | 15 Jul 15:56 2014

Squid 3.4 very high cpu usage

I've seen a February thread about this problem, but it seems that it 
never reached a consensus.

I've just tried to migrate from 3.3.12 to 3.4.6, but almost instantly 
got timeout problems and 100% CPU usage by the squid process.
I'm using Kerberos auth and external LDAP group helpers, plus SSL bump; 
the config is attached below.

All caching (memory and on-disk) is disabled at compile time:
         ./configure \
         --prefix=/opt/squid \
         --sysconfdir=/etc/squid \
         --disable-loadable-modules \
         --disable-wccp \
         --disable-wccpv2 \
         --disable-eui \
         --disable-htcp \
         --disable-select \
         --disable-poll \
         --with-pthreads \
         --disable-storeio \
         --disable-disk-io \
         --disable-removal-policies \
         --enable-delay-pools \
         --disable-useragent-log \
         --disable-referer-log \
         --enable-ssl \
         --enable-ssl-crtd \
         --disable-cache-digests \
         --enable-icap-client \
(Continue reading)

amaury@tin.it | 15 Jul 10:45 2014

Re: problem streaming video

Try setting these options:

via off
forwarded_for delete

Best regards,


Cameron Charles | 15 Jul 09:04 2014

Confusing external acl, reply_body_max_size and EXT_LOG combo issue


I'm having some confusing trouble with an external acl based
reply_body_max_size setup, but only when the %EXT_LOG is brought into play.

I have an external acl setup as such:

> external_acl_type response_size_type ttl=300 children-startup=2 children-idle=1
> children-max=10 %URI %EXT_LOG %TAG python max_file_size_ext_acl.py

which is used to check against some external data, caching the
response for the reply_body_max_size directive to use. An example:

> acl response_size_31 external response_size_type 31
> http_access allow response_size_31
> reply_body_max_size 31 MB response_size_31

Now this works perfectly fine, no issues whatsoever, until the
external acl alters the %EXT_LOG (and passes it back); pretty much any
alteration to the ext_log data causes squid to basically ignore the
answer it gets back from the external acl and carry on.
The external acl can also take the ext_log in and pass it out the
other side untouched with no issues, so it doesn't appear to be simply
the fact that it's passing the ext_log back.

I'm really stumped as to what's going on here; any help would be appreciated.
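For context, here is a minimal sketch of what such a helper can look like, assuming the classic one-request-per-line helper protocol (no concurrency channel). The size table and matching logic are made up for illustration; the real max_file_size_ext_acl.py obviously differs:

```python
#!/usr/bin/env python
# Minimal external ACL helper sketch. Fields arrive in the
# %URI %EXT_LOG %TAG order given on the external_acl_type line,
# plus the acl argument ("31" in the example above).
import sys

# Hypothetical table: URL substring -> size category (made up)
SIZE_TABLE = {"example.com/big": "100", "example.com/small": "31"}

def handle(line):
    fields = line.strip().split()       # %URI %EXT_LOG %TAG <arg>
    uri = fields[0] if fields else ""
    for pattern in SIZE_TABLE:
        if pattern in uri:
            # Appending key=value pairs (e.g. log=...) to this reply
            # is exactly where the reported breakage shows up; a
            # bare "OK" is the path that works.
            return "OK"
    return "ERR"

if __name__ == "__main__":
    for request in sys.stdin:
        sys.stdout.write(handle(request) + "\n")
        sys.stdout.flush()              # helpers must not buffer
```

With a helper like this, `acl response_size_31 external response_size_type 31` matches whenever the URI hits the table.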

Cameron Charles
(Continue reading)

Edwin Marqe | 14 Jul 19:46 2014

Host header forgery policy

Hi all,

After an upgrade of squid3 to version 3.3.8-1ubuntu6, I got the
unpleasant surprise of what is called the "Host header forgery" policy.

I've read the documentation on this feature, and although I understand
the motivation for its implementation, I honestly don't find it very
practical to implement this without the possibility of disabling it,
basically because not all scenarios fit the requirements described in
the documentation.

I have about 30 clients and I've configured squid3 as a transparent
proxy on port 3128 on a remote server. The entry point is port 8080,
which is then redirected on the same host to port 3128.

However, *any* opened URL throws the warning:

2014/07/14 19:21:52.612| SECURITY ALERT: Host header forgery detected
on local= remote= FD 9 flags=33 (local IP
does not match any domain IP)
2014/07/14 19:21:52.612| SECURITY ALERT: By user agent: Mozilla/5.0
(Windows NT 6.1; WOW64; rv:30.0) Gecko/20100101 Firefox/30.0
2014/07/14 19:21:52.612| SECURITY ALERT: on URL: google.com:443
2014/07/14 19:21:52.612| abandoning local=
remote= FD 9 flags=33
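One mitigation sketch, assuming the mismatch comes from squid resolving hostnames against different DNS servers than the clients use (the addresses below are placeholders), is to point squid at the clients' resolvers in squid.conf:

```
# Placeholder addresses: make squid resolve hostnames against the
# same DNS the clients use, so the connected IP matches a domain IP.
dns_nameservers 192.0.2.53 192.0.2.54
```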

I have manually configured the browser of these clients - the problem
is that in the company's network I have my DNS servers and on the
remote host (where the Squid server is running) there are others, and
(Continue reading)

Patrick Chemla | 14 Jul 17:30 2014

Problem to set up multi-cpu multi-ports squid 3.3.12


I have a multi-port config of squid, running since version 3.1.19 and 
upgraded to 3.3.12. It works like a charm, but the traffic is reaching 
the limit of one CPU.

I want to use SMP capabilities with SMP workers on my 8 cpus/64G mem 
Fedora 20 box.

I saw in the documentation 
that workers can share http_ports, right?

When I run with workers 1, I can see the squid-1 process listening on the 
designated port with netstat.

When I run with workers greater than 1, I can see processes squid-1, 
squid-2 ... squid-n with ps -ef | fgrep squid, but no process 
listening on any TCP port with netstat -apn (I do see all the processes 
listening on UDP ports).

I can't find any configuration example featuring the SMP workers capability 
for squid 3.3.12, including the http_port lines.
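A minimal sketch of such a config, assuming a shared port and four workers (both numbers are assumptions), would be:

```
# Four kid processes; a plain http_port is shared by all workers.
workers 4
http_port 3128

# Or give each worker its own port via the process_number macro:
# if ${process_number} = 1
# http_port 3129
# endif
```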

Could anyone help me there?

Thanks a lot

(Continue reading)

Eliezer Croitoru | 14 Jul 17:05 2014

squid head RPM for the request.

I got a request for squid HEAD RPMs privately, and now it's public.
It can be found here:

It's based on the sources of:

I do hope to release 3.4.6 in the next week, but since I have been 
walking through the bugs in Bugzilla, it takes time to understand what 
will affect the new release and what will not.

I am considering releasing only 3.4.7 instead, due to a couple of issues.


amaury@tin.it | 14 Jul 15:14 2014

502 Bad Gateway

I have a problem with:
- squid-3.3.9
- squid-3.4.5
but NO problem with:
- squid-2.7.stable9
- no proxy at all

I have tested with Firefox 24.6 and Internet Explorer 8.0.

In the browser, the error displayed is:

The following error was encountered while trying to retrieve the URL: http://www.regione.lombardia.it/

    Read Error

The system returned: (104) Connection reset by peer

An error condition occurred while reading data from the network. Please 
retry your request.

Your cache administrator is .......

In access.log (on version 3.3.9):
1405342317.708      7 xxx.xxx.xxx.xxx:52686 
TCP_MISS/502 4072 GET http://www.regione.lombardia.it/- HIER_DIRECT/www.regione.lombardia.it
(Continue reading)

Klaus Reithmaier | 14 Jul 12:21 2014

Define two cache_peer directives with same IP but different ports


I have two machines, each running two squid processes. I want every process to query the other three over
HTCP to find out whether a specified element is in their cache.

So this is my setting:

--------------------------------     --------------------------------
| Proxyserver1: IP |     | Proxyserver2: IP |
--------------------------------     --------------------------------
  | squid1: Port 8080 |                   | squid1: Port 8080 |
  | squid2: Port 8081 |                   | squid2: Port 8081 |
  ---------------------                   ---------------------

This is the cache_peer configuration on Proxyserver1 process squid1:
-- START config --
cache_peer sibling 8081 4828 proxy-only htcp
cache_peer sibling 8080 4827 proxy-only htcp
cache_peer sibling 8081 4828 proxy-only htcp
-- END config --

It's obvious that
cache_peer sibling 8080 4827 proxy-only htcp and
cache_peer sibling 8081 4828 proxy-only htcp
are different proxies, because they use different ports. But squid can't be started:

FATAL: ERROR: cache_peer specified twice
Squid Cache (Version 3.3.12): Terminated abnormally.

How can I define two siblings on the same machine?
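One approach (a sketch; the hostnames below stand in for the stripped addresses above) is the cache_peer name= option, which gives each peer a unique identifier so several cache_peer lines may share one host:

```
# name= disambiguates peers that share a host; the hostnames
# proxyserver1/proxyserver2 are placeholders.
cache_peer proxyserver1 sibling 8081 4828 proxy-only htcp name=ps1-squid2
cache_peer proxyserver2 sibling 8080 4827 proxy-only htcp name=ps2-squid1
cache_peer proxyserver2 sibling 8081 4828 proxy-only htcp name=ps2-squid2
```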
(Continue reading)

Jason Haar | 14 Jul 05:57 2014

feature request for sslbump

Hi there

I've started testing sslbump with "ssl_bump server-first" and have
noticed something (squid-3.4.5)

If your clients have the "Proxy CA" cert installed and go to legitimate
https websites, then everything works perfectly (excluding Chrome with
its pinning, but there's no way around that). However, if someone goes
to an https website with either a self-signed cert or a server cert
signed by an unknown CA, then squid generates a "legitimate" SSL cert
for the site, but shows the squid error page to the browser, telling
them the error.

The problem with that model is that it means no-one can get to websites
using self-signed certs. Using "sslproxy_cert_adapt" to allow such
self-signed certs is not a good idea, as squid is then effectively
legitimizing the server, which may be a Very Bad Thing.

So I was thinking, how about if squid (upon noticing the external site
isn't trustworthy) generates a deliberate self-signed server cert itself
(i.e. not signed by the Proxy CA)? Then the browser would see the
untrusted cert, the user would get the popup asking if they want to
ignore cert errors, and can then choose whether to trust it or not. That
way the user can still get to sites using self-signed certs, and the
proxy gets to "see" into the content, potentially running AVs over

...or haven't I looked hard enough and this is already an option? :-)
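If I'm reading the docs right, sslproxy_cert_sign with the signUntrusted algorithm sounds close to this. A sketch (the acl name and the particular ssl_error selection are my own guesses):

```
# Guesswork sketch: sign fakes for servers whose real cert failed
# validation with a CA the clients do NOT trust, so the browser
# shows its usual warning instead of a squid error page.
acl failedServerCert ssl_error X509_V_ERR_DEPTH_ZERO_SELF_SIGNED_CERT
sslproxy_cert_sign signUntrusted failedServerCert
```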

(Continue reading)

freefall12 | 12 Jul 12:43 2014

how can i get the localport in forward proxy mode?

I use iptables to redirect a range of ports to the squid listening port, and
I want the access log to show the port from the TCP packet instead of the
listening port. Sadly, the localport seems only available when using intercept
or transparent mode; otherwise it's the same as the listening port. Thanks.
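That matches the documented behaviour: only an intercept (or tproxy) http_port does the NAT lookup that recovers the original destination port. A sketch of the combination (port numbers are assumptions):

```
# Only an intercept port consults the NAT table, so the logged
# localport (%lp) reflects the original destination port.
http_port 3129 intercept
# Matching iptables rule (assumed port range):
#   iptables -t nat -A PREROUTING -p tcp --dport 8000:8100 \
#            -j REDIRECT --to-ports 3129
```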

View this message in context: http://squid-web-proxy-cache.1019090.n4.nabble.com/how-can-i-get-the-localport-in-forward-proxy-mode-tp4666888.html
Sent from the Squid - Users mailing list archive at Nabble.com.

Walter H. | 12 Jul 09:50 2014

Re: Fwd: gmail.com certificate name mismatch


On 16.06.2014 19:12, Alex Rousskov wrote:
> On 06/16/2014 10:58 AM, Walter H. wrote:
>> I found something strange in connection with server-first and google ...
>> any browser:  IE, googles own browser Chrome doesn't tell any problem
>> with ie. https://www.youtube.com
>> but FireFox does - you know the error when something with certificates
>> is not ok -  with this:
>> www.youtube.com uses an invalid security certificate.
>> The certificate is not trusted because it was issued by an invalid CA
>> certificate.
>> (Error code: sec_error_inadequate_key_usage)
>> can someone please explain why this is a problem only in FF?
> Please see http://bugs.squid-cache.org/show_bug.cgi?id=3966
> It appears to match your use case well and has a patch.
Which squid version fixes this problem? And is there an RPM for CentOS?


(Continue reading)