asad | 21 Aug 15:39 2015

Using Squid as forward http proxy failing to complete request?

I'm using Squid as a local proxy.

My config file is as follows:

    # Recommended minimum configuration:
    # Example rule allowing access from your local networks.
    # Adapt to list your (internal) IP networks from where browsing
    # should be allowed
    acl localnet src    # RFC1918 possible internal network
    acl localnet src     # RFC1918 possible internal network
    acl localnet src    # RFC1918 possible internal network
    acl localnet src fc00::/7       # RFC 4193 local private network range
    acl localnet src fe80::/10      # RFC 4291 link-local (directly plugged) machines
    acl SSL_ports port 443
    acl Safe_ports port 80        # http
    acl Safe_ports port 21        # ftp
    acl Safe_ports port 443        # https
    acl Safe_ports port 70        # gopher
    acl Safe_ports port 210        # wais
    acl Safe_ports port 1025-65535    # unregistered ports
    acl Safe_ports port 280        # http-mgmt
    acl Safe_ports port 488        # gss-http
    acl Safe_ports port 591        # filemaker
    acl Safe_ports port 777        # multiling http
    acl CONNECT method CONNECT
    # Recommended minimum Access Permission configuration:
    # Only allow cachemgr access from localhost
    http_access allow localhost manager
    http_access deny manager
    # Deny requests to certain unsafe ports
    http_access deny !Safe_ports
    # Deny CONNECT to other than secure SSL ports
    http_access deny CONNECT !SSL_ports
    # We strongly recommend the following be uncommented to protect innocent
    # web applications running on the proxy server who think the only
    # one who can access services on "localhost" is a local user
    #http_access deny to_localhost
    # Example rule allowing access from your local networks.
    # Adapt localnet in the ACL section to list your (internal) IP networks
    # from where browsing should be allowed
    http_access allow localnet
    http_access allow localhost
    # And finally deny all other access to this proxy
    http_access deny all
    # Squid normally listens to port 3128
    http_port 3128
    # Uncomment the line below to enable disk caching - path format is /cygdrive/<full path to cache folder>, i.e.
    #cache_dir aufs /cygdrive/d/squid/cache 3000 16 256

    cache_peer parent 8080 0 no-query default       login=my_username:my_password
    never_direct allow all
    # Leave coredumps in the first cache dir
    coredump_dir /var/cache/squid
    # Add any of your own refresh_pattern entries above these.
    refresh_pattern ^ftp:        1440    20%    10080
    refresh_pattern ^gopher:    1440    0%    1440
    refresh_pattern -i (/cgi-bin/|\?) 0    0%    0
    refresh_pattern .        0    20%    4320
    max_filedescriptors 3200

Now I want to use it as a forward proxy. For that, the relevant configuration is the cache_peer and never_direct lines near the end of the config file above.

I have browsed tons of web pages, and all of them say to include a line similar to this. Beyond this configuration, I don't know what else to add to make it work.

One more thing: under Safe_ports, should I change the HTTP port to 8080, since my local machine is already behind another proxy?

Also, I'm using domain authentication (NTLM) to connect to the other proxy. Is any authentication configuration required besides what is already in the config file?
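To make the question concrete, here is the shape of the forwarding config I believe I need (hostname and credentials are placeholders; whether login=user:password works against an NTLM upstream, or whether something like login=PASSTHRU is needed instead, is part of what I'm asking):

    # Send every request via the upstream proxy; never go direct.
    cache_peer upstream.example.com parent 8080 0 no-query default login=myuser:mypassword
    never_direct allow all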

squid-users mailing list
squid-users <at>
Amos Jeffries | 21 Aug 09:28 2015

ssl_bump updates coming in 3.5.8

Hi all,

 Christos has managed (we think) to resolve a fairly major design issue
that has been plaguing the 3.5 series peek-and-splice feature so far.

The problem was that Squid was not actually following the intended and
documented logic of skipping the impossible bumping actions. The patch
for that will be in 3.5 snapshots labelled r13895 or later (still waiting
on mirror updates as I write this; 1-2 hrs more, maybe).

Since it affects the visible behaviour of squid.conf settings, I would
like some volunteers to help test it out: find what problems remain, and
let me know what to alert others to in the next formal release.

We need testing both from those having issues currently, and those who
managed to get a trial-and-error config going with older 3.5.

Hopefully, if you are using the at_step workarounds there should not be
any visible difference. But some of the at_step tests may be needless now.

Thank you in advance for any assistance.

Nicolaas Hyatt | 21 Aug 03:14 2015

ETA for Bug 3775

Hey guys,

I have been paying close attention to the list for a while and am just beginning to realize the scale of the work the Squid team has in front of it. So please understand that I'm _NOT_ begging here. I understand that other priority issues take precedence, and my little issue is way down the line. I was just wondering whether there is some sort of schedule for when this may be examined, in case I need to provide any more dumps.


Thanks in advance,


Stakres | 20 Aug 16:56 2015

refresh_pattern by type mime

Hi All,

There is an existing case in Bugzilla about this request, and it seems a
good idea:
refresh_pattern by MIME type.

It would be very nice to have this feature in Squid, to define
different min/max times per MIME type.
We could give script/html/css/etc. a short time, and
images/videos/audio/application/etc. a long time...

Squid team, what is your opinion on that?
Is it maybe already on the roadmap for the next 3.5.x build, or for 4.x?

Bye Fred

Stakres | 20 Aug 16:38 2015

refresh_pattern and same objects

Hi All,

Maybe someone has the info already...
With a refresh_pattern of 1 week max: if the same object is "visited"
(served from the Squid cache) every day, will the object be deleted 1
week after it was first cached, or will Squid add +1 week each time the
object is served from the cache?

My issue is: if we cache a big object (Windows Update, Chrome, etc.) for
1 week or 6 months, do we have to download it again once the initial
time is over? Or can we expect the same big object to stay available
from the cache for a very long time, as long as it is visited at least
once before the time runs out?

Windows 10 updates are about 2.6 GB of objects; if the max time is 1
month, I don't want to re-download that size monthly when it's used daily...

See what I mean ? 
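For what it's worth, my understanding (worth confirming) is that when the max age passes, the object only becomes stale, not deleted: on the next request Squid revalidates it with If-Modified-Since, and a 304 from the origin refreshes the cached copy without re-downloading it. Something like this illustrative pattern (values are examples only, not a recommendation) would force IMS revalidation for big installer files:

    # Illustrative only: keep big installer files fresh up to 30 days,
    # then revalidate with If-Modified-Since instead of re-fetching.
    refresh_pattern -i \.(cab|exe|msi|msu)$ 10080 100% 43200 refresh-ims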

Bye fred

Peter | 20 Aug 09:59 2015

Has anyone a working config for windows update through squid?

We run squid 3.5.6 in a proxy server with FreeBSD 9.3.
Squid is the only way out, there is no transparency at all.
We have problems with windows update through squid.

I have looked at this: <at>
and this:

But they are both more than a year old.

I have entered the config recommendations from the FAQ page above.
But reload-into-ims seems to have been removed: I get a syntax error
when I try to add that option, even though this page still lists
reload-into-ims as a valid option:

Anyway, I wonder if anyone has a working config for
windows update through squid?
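In case it helps others answer: below is roughly what I have now, adapted from the wiki recommendations, with refresh-ims substituted for the rejected reload-into-ims (whether that substitution is correct is part of my question; all values are illustrative):

    # Let Squid fetch whole objects when Windows Update asks for ranges:
    range_offset_limit 200 MB
    maximum_object_size 200 MB
    quick_abort_min -1 KB
    # Revalidate rather than re-fetch update binaries:
    refresh_pattern -i windowsupdate\.com/.*\.(cab|exe|msi|msu|psf)$ 4320 80% 43200 refresh-ims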


John Pearson | 19 Aug 21:20 2015

Mac OS X Updates

Does anyone have Mac OS X update caching working, without doing an SSL bump? I think the updates are hosted over https ( )

Jorgeley Junior | 19 Aug 14:28 2015

Virtual Memory

Hi guys, sorry if I'm asking a dumb question...
My Squid is using a lot of virtual memory, as you can see in the screenshot. Is that normal?
I have 8 GB of physical memory and my Squid is set to 4 GB; my cache is configured like this:
cache_dir diskd /cache 4096 16 256 Q1=64 Q2=72
cache_dir diskd /cache 4096 16 256 Q1=64 Q2=72
cache_dir diskd /cache 4096 16 256 Q1=64 Q2=72
cache_dir diskd /cache 4096 16 256 Q1=64 Q2=72
cache_dir diskd /cache 4096 16 256 Q1=64 Q2=72
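A rough sizing sketch for a setup like this, using the usual rule of thumb of roughly 10-14 MB of in-memory index per GB of on-disk cache (treat all numbers as estimates, not guarantees). Note also that all five cache_dir lines above point at the same /cache path; that may just be a paste artifact, but in a real config each cache_dir needs its own directory.

    # 5 x 4096 MB cache_dir   = 20 GB on disk
    # index at ~14 MB per GB  -> ~280 MB of RAM just for the index
    # cache_mem 4096 MB       -> 4 GB of hot-object RAM
    # plus per-connection buffers, so a virtual size of several GB is expected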



adricustodio | 19 Aug 14:17 2015

Squid + Mysql

Hi guys, I have a question... again...

I'm running CentOS 7 + Squid 3.3.8.
I'm trying to set up Squid with MySQL auth, but I'm kind of lost here...

For now Squid is running fine with basic_ncsa_auth.
I've created a MySQL DB called "squid" and a table called "users" with
name, pass (varchar) and status columns.

My Squid did not come with mysql_auth, so I tried to write a new helper,
and also tried to use basic_db_auth.
My mysql_auth helper is this:

#!/usr/bin/php
<?php
// Squid basic auth helper: reads "username password" lines on STDIN
// and must answer OK or ERR on STDOUT for each one.
// Note: the table described above is "users", but this query reads
// from "usuarios"; one of the two names is probably wrong.
$link = mysqli_connect("localhost", "usuario_do_banco", "senha_do_banco");

if (!$link) {
   printf("Erro ao Conectar com o Banco de Dados: %s\n", mysqli_connect_error());
   exit(1);
}

$selectdb = mysqli_select_db($link, "squid");

if (!$selectdb) {
   printf("Erro ao Abrir o Banco de Dados: %s\n", mysqli_error($link));
   exit(1);
}

while (fscanf(STDIN, "%s %s", $nome, $senha)) {
   $select = "SELECT nome, senha FROM usuarios WHERE nome = '" . $nome . "' AND status = 1";
   $Query = mysqli_query($link, $select);
   $erro = true;

   while ($Registro = mysqli_fetch_array($Query)) {
      if (crypt($senha, $Registro['senha']) == $Registro['senha']) {
         $erro = false;
         printf("OK\n");
      }
   }
   if ($erro) printf("ERR\n");
}
My Squid asks for user and password but does not authenticate...
With basic_db_auth I didn't change anything; I don't know how to
configure Squid to authenticate against MySQL.
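For reference, I think the stock basic_db_auth helper (a Perl script shipped with Squid, which needs DBD::mysql installed) can be pointed at a table like this directly, so a custom PHP helper may not be needed at all. A sketch of what I mean; the path, credentials, column names and flags here are guesses to adapt, not a tested config:

    auth_param basic program /usr/lib64/squid/basic_db_auth \
        --dsn "DBI:mysql:database=squid" --user db_user --password db_pass \
        --table users --usercol name --passwdcol pass --cond "status = 1" --plaintext
    auth_param basic children 5
    auth_param basic realm Squid proxy
    acl db_users proxy_auth REQUIRED
    http_access allow db_users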

Du, Hongfei | 19 Aug 13:45 2015

Re: squid-users Digest, Vol 12, Issue 33

Hi Eliezer and Amos,

Sure, I have posted this to the squid-dev list, which is more relevant to this issue.

Many thanks for Amos' comments, really helpful information. For clarification: here we look at HTTP caching, and at this stage only at the cache-dir selection algorithm, rather than the peer selection algorithm. Namely, we create three separate folders, /var/spool/cache1, /var/spool/cache2 and /var/spool/cache3, and we intend to make Squid intelligent enough to strictly put content (e.g. all elements from a single URL, as defined by one of our subscriber users) into a specified folder, rather than following any built-in RR/LL rules based on the status (e.g. residual capacity) of the cache server itself. The RR/LL source code we are looking for is really the cache-dir selection algorithm applied to local storage (the three folders mentioned above). Besides, referring to "There is algorithm(s) applied in layers to decide which type of storage area is use, then which one within the selected type is most appropriate. Based on object availability, cacheability, size, storage area speed, object popularity, and temporal relationships to others.", can you elaborate on where we can look into these algorithms?

Best Regards,



Hongfei Du
Staff Engineer (UK Software)
InterDigital UK, Inc.
Shoreditch Business Center
64 Great Eastern Street
London,  EC2A 3QR
T: +44 207.749.9140
Hongfei.Du <at><>


-----Original Message-----
From: squid-users [mailto:squid-users-bounces <at>] On Behalf Of squid-users-request <at>
Sent: Tuesday, August 18, 2015 1:00 PM
To: squid-users <at>
Subject: squid-users Digest, Vol 12, Issue 33


Today's Topics:

  1. Re: Question on developing customized Cache Selection
     algorithm from Round Robin, Least Load (Amos Jeffries)


Message: 1
Date: Tue, 18 Aug 2015 22:24:45 +1200
From: Amos Jeffries <squid3 <at>>
To: squid-users <at>
Subject: Re: [squid-users] Question on developing customized Cache
       Selection algorithm from Round Robin, Least Load
Message-ID: <55D307ED.3030500 <at>>
Content-Type: text/plain; charset=utf-8

On 18/08/2015 5:42 a.m., Du, Hongfei wrote:
> Hello
> We are attempting to extend the Squid cache selection algorithm to make
it more sophisticated, let's say by adding WRR or WFQ. A few questions to start with:

Like Eliezer said this is really a question for squid-dev mailing list where the developers hang out.

WRR (weighted round-robin) is already implemented, and is exactly how Squid cache_dirs currently operate.
The weighting is based on storage area available size and I/O loading.

WFQ (weighted fair queueing) is a queueing algorithm, as the 'Q' says.
Caching != queueing. In fact, a cache is so different from a queue that WFQ would badly affect performance if
it were used to decide what storage an object went into.
In essence, the problem is that we cannot dictate what objects will be requested by clients. They want what
they ask for. Squid's duty is 1) to answer reliably and 2) as fast as possible, regardless of the object's location.

> - As we probably have to rewrite the algorithm and recompile, does
anyone know where (i.e. which file) the existing Round Robin or Least Load algorithm is defined in the source code?

That depends on whether you mean the algorithm applied for local storage vs network sources, or the one(s)
applied to individual caches for garbage collection.

> - Is there straight forward method to tell/instruct squid to store
content from network(e.g. an URL) in a predefined specific disk folder rather than using the selection
algorithm itself?

Simply stated: the URL and all other relevant details from the transaction are hashed to look up an index
and find the 32-bit 'sfileno' value, which is a UID encoding the location of an indexed object in Squid's
local storage.

It _sounds_ simple enough, but those "other relevant details" are a massive complication. One single URL
can potentially contain all possible objects that ever have or ever will exist on the Internet. Even
storing things one file per URL dies a horrible death when it encounters popular modern websites.

Within Squid we refer to "the HTTP cache" as a single thing. But it is constructed of many storage areas. The
individual cache_dirs and other places where HTTP objects might be found. Remote network sources are
also accounted for.

There are algorithms applied in layers to decide which type of storage area is used, then which one within
the selected type is most appropriate, based on object availability, cacheability, size, storage area
speed, object popularity, and temporal relationships to other objects.
Then an sfileno is assigned, if it is local storage.

Then objects get moved between storage areas anyway, based on need and popularity. And objects get removed
from individual storage areas based on lack of popularity. Both of which affect future requests for them.

So the particulars of what you want to do matter, a lot.

FWIW, we have known outstanding needs for:

* updated cache_peer selection algorithms. Current Squid outgoing TCP connection failover works with a
list of IPs that get tried until one succeeds. The old selection algorithms produce only a single IP rather
than a preference-ordered set of peers to try.
- also, none of the algorithms provide byte-based loading.

* ETag-based cache index. For better-performing If-Match/If-None-Match revalidation traffic.

* 206 partial object caching. Rock can store them, but no algorithms yet exist to properly manage the pieces
of incomplete objects or aggregation from different transactions.

* per-area storage indexes, instead of a Big Global Index. Working towards 64-bit sfileno values, which are
needed for some TB-sized caches. Rock and Transients storage areas are done, but other caches are still TODO.

* better HDD load detection. To inform the weighting of cache_dir selection algorithms. This is a hardware
driver related project.

* Support for ZFS and XFS dynamic inode sizing. This causes lots of issues with "wrong" disk storage
under/over usage. Another hardware driver related project.





End of squid-users Digest, Vol 12, Issue 33
Jason Haar | 19 Aug 04:20 2015

can't get bump to work anymore on 3.5.7?

Hi there

I've had bump working before (testing), but went off to other things
for a while; now I'm back and can't get it to work anymore. I've
upgraded to 3.5.7 (from some previous release, maybe 3.5.4?), so it may
be something that changed in between.

I've stripped my config back to maximize the chance of getting bumping
to work; it is probably best described by:

root]# egrep -i 'crtd|bump|ssl:' squid.conf | grep -v '#'
squid.conf:http_port 3128 ssl-bump cert=/etc/squid/squidCA.cert generate-host-certificates=on dynamic_cert_mem_cache_size=256MB options=ALL
squid.conf:https_port 3129 intercept ssl-bump cert=/etc/squid/squidCA.cert generate-host-certificates=on dynamic_cert_mem_cache_size=256MB options=ALL
squid.conf:include /etc/squid/
squid.conf:logformat logdetailed %ts.%03tu %6tr %>a %Ss/%03>Hs %<st %rm %ru %[un %Sh/%<a %mt %ssl::>sni %ssl::>cert_subject
squid.conf:sslcrtd_program /usr/lib64/squid/ssl_crtd -s /var/lib/squid/ssl_db -M 256MB
squid.conf:sslcrtd_children 32 startup=15 idle=5
squid.conf:ssl_bump peek all
squid.conf:ssl_bump bump all

I interpret that as: peek at all traffic, then bump all. And bumping
will involve creating new certs signed by squidCA.cert, stored under
/var/lib/squid/ssl_db.

However, on an empty system, "curl -vi -x localhost:3128" shows an SSL session that *doesn't* involve
the squidCA, and indeed no changes are made under
/var/lib/squid/ssl_db (yes, the files/dirs exist and the perms are correct).

I.e. no matter what https website I go to, they are all spliced:
the logs show exclusively "TCP_TUNNEL/200".

I cranked up debug_options and saw this

2015/08/19 14:13:16.493 kid1| parseV3Hello: Found server
2015/08/19 14:13:16.493 kid1| parseV3Hello: TLS Extension:
ff01 of size:1
2015/08/19 14:13:16.493 kid1| parseV3Hello: TLS Extension:
d of size:16
2015/08/19 14:13:16.493 kid1| read: Hold flag is set, retry
latter. (Hold 11bytes)
2015/08/19 14:13:16.493 kid1| stateChanged: FD 24 now:
0x2002 23RCHA (SSLv2/v3 read client hello A)
2015/08/19 14:13:16.493 kid1| SetSelect: FD 24, type=1,
handler=1, client_data=0x3d9b8f8, timeout=0
2015/08/19 14:13:16.493 kid1|
clientPeekAndSpliceSSL: SSL_accept failed.

I recall hearing that some new code has been introduced that helps Squid
"magically" figure out whether to even bother bumping some traffic
types? Is this related? It smells like Squid has already decided not to
bump, based on its own logic more than on the config (i.e. is my config
correct, but irrelevant?).
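For completeness, here is what I suspect I should be trying instead: peeking only at step 1 (the TLS client hello), since my understanding is that peeking at the server hello as well can make bumping impossible and force a splice. Untested sketch:

    # Peek at the TLS client hello only, then bump everything:
    acl step1 at_step SslBump1
    ssl_bump peek step1
    ssl_bump bump all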

This is squid-3.5.7 on Fedora-22



Jason Haar
Corporate Information Security Manager, Trimble Navigation Ltd.
Phone: +1 408 481 8171
PGP Fingerprint: 7A2E 0407 C9A6 CAF6 2B9F 8422 C063 5EBB FE1D 66D1
