Jonathan Filogna | 13 May 15:15 2016

Squid: Caching Google Drive Files?

Hello all. Here's a question: Can Squid do a caché for some files storaged on googledrive with ssl intercept?

--
Jonathan Filogna
SysAdmin
_______________________________________________
squid-users mailing list
squid-users <at> lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users
asakura | 13 May 14:26 2016
Picon

authenticate_ip_ttl does not work

Hello,

Thank you always for your kind support.

I testing squid-3.5.19 "max_user_ip/authenticate_ip_ttl" feature.
but, access control not work well.
(Value of authenticate_ip_ttl is not enable)

I investigating, and tried to change as follows.

src/auth/User.cc
----
# diff User.cc.org User.cc
287c287
<             ipdata->ip_expiretime = squid_curtime;
---
>             ipdata->ip_expiretime = squid_curtime + ::Config.authenticateIpTTL;
----

Is this would be correct change?

Sorry my poor English.

regards,
Kazuhiro
_______________________________________________
squid-users mailing list
squid-users <at> lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users
Reet Vyas | 13 May 07:58 2016
Picon

Squid Peek and splice

Hi Amos/Yuri,

Currently my squid is configured with ssl bump, now I want to use peek and splice. I read in some forum that we don't need to install certificate on client's machine.

As I have already asked before in mailing list to install SSL certificate on Android devices, which is not working.

So my question is If I want to use peek and splice for example I want https filtering for  proxy websites  and I dont want ssl for bank websites and facebook youtube and gmail. how will it work? Do i need to install SSL certifcate on client or not, I am bit confused with peek and splice thing.

Please let me know is that possible to configure squid 3.5.19 in such a way so that it will bump  only proxy websites not FB youtube etc.
_______________________________________________
squid-users mailing list
squid-users <at> lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users
David Touzeau | 13 May 00:04 2016

ACL is used in context without an HTTP response. Assuming mismatch

Hi

 

I did not want squid to log it’s TCP_DENIED/407 when sending authentication to browsers

 

I think this acl should work

 

acl CODE_TCP_DENIED http_status 407

access_log none CODE_TCP_DENIED

 

But squid claim :

 

2016/05/12 23:44:07 kid1| WARNING: CODE_TCP_DENIED ACL is used in context without an HTTP response. Assuming mismatch.

 

Why this rule is wrong ?

 

Best regards

 

_______________________________________________
squid-users mailing list
squid-users <at> lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users
Walter H. | 12 May 22:20 2016

Regular expressions with dstdom_regex ACL

Hello,

can someone please tell me which regular expression(s) would really block
domains which are IP hosts

for IPv4 this is my regexp:
^[12]?[0-9]{1,2}\.[12]?[0-9]{1,2}\.[12]?[0-9]{1,2}\.[12]?[0-9]{1,2}$
and this works as expected

acl block_domains_iphost dstdom_regex 
^[12]?[0-9]{1,2}\.[12]?[0-9]{1,2}\.[12]?[0-9]{1,2}\.[12]?[0-9]{1,2}$
deny_info ERR_IPHOST_BLOCKED block_domains_iphost
http_access deny block_domains_iphost

BUT, I tried and tried and failed with IPv6

this section in squid.conf

acl block_domains_ip6host dstdomain [ipv6]
deny_info ERR_IPHOST_BLOCKED block_domains_iphost6
http_access deny block_domains_iphost6

doesn't work for exact this given IPv6 address ...

I want any IPv6 address

can someone please tell me how I can achive this?

the result should be that
any URL like this
http(s)://ip-address/ should be blocked by the specified error page

Thanks and Greetings from Austria,
Walter

Attachment (smime.p7s): application/pkcs7-signature, 5831 bytes
_______________________________________________
squid-users mailing list
squid-users <at> lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users
derk@muenchhausen.de | 12 May 21:43 2016
Picon
Gravatar

squid, squidguard and elk - simply combined as docker containers

Dear Squid enthusiasts !

Squid is fine – it simply works since years at home. 
Squidguard helps me to block malicious websites. 
kibana visualizes from where my Browser retrieves data 
… and Docker combines everything in a simple way :)

I published a small Docker Compose project on Github. Feel free to try it – feedback is very welcome!

Best regards,
Derk


 
_______________________________________________
squid-users mailing list
squid-users <at> lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users
Nilesh Gavali | 12 May 19:27 2016

Windows Squid with AD authentication

Hello yuri;
I haven't tried it as didn't know from where to start, So need some documentation to start with , Squid on Widnows to be integrated with AD authentication..

Thanks & Regards
Nilesh Suresh Gavali




From:        squid-users-request <at> lists.squid-cache.org
To:        squid-users <at> lists.squid-cache.org
Date:        12/05/2016 17:55
Subject:        squid-users Digest, Vol 21, Issue 56
Sent by:        "squid-users" <squid-users-bounces <at> lists.squid-cache.org>



Send squid-users mailing list submissions to
                squid-users <at> lists.squid-cache.org

To subscribe or unsubscribe via the World Wide Web, visit
                http://lists.squid-cache.org/listinfo/squid-users
or, via email, send a message with subject or body 'help' to
                squid-users-request <at> lists.squid-cache.org

You can reach the person managing the list at
                squid-users-owner <at> lists.squid-cache.org

When replying, please edit your Subject line so it is more specific
than "Re: Contents of squid-users digest..."


Today's Topics:

  1. Re: squid-users Digest, Vol 21, Issue 54 (Yuri Voinov)


----------------------------------------------------------------------

Message: 1
Date: Thu, 12 May 2016 22:55:47 +0600
From: Yuri Voinov <yvoinov <at> gmail.com>
To: squid-users <at> lists.squid-cache.org
Subject: Re: [squid-users] squid-users Digest, Vol 21, Issue 54
Message-ID: <27d6af04-7c67-0b8e-968f-2b3e7828200c <at> gmail.com>
Content-Type: text/plain; charset="utf-8"


-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA256

Condolences. Windows is not the most common platform for Squid.

But personally I do not see a fundamental difference in the
implementation of authentication with AD on Windows or Unix. You have
already tried something to do or so, looking ready-to-use configuration?


12.05.16 23:15, Nilesh Gavali пишет:
> Hello Antony;
> we have Squid 3.5 on Windows 2012 R2 OS & for which I need to
integrate squid with AD. I search online but all of the link are based
on linux platform squid.
> I am looking for squid running on Windows Platform which need to
integrate with AD authentication.
>
> Thanks & Regards
> Nilesh Suresh Gavali
>
>
>
> From:        squid-users-request <at> lists.squid-cache.org
> To:        squid-users <at> lists.squid-cache.org
> Date:        12/05/2016 17:33
> Subject:        squid-users Digest, Vol 21, Issue 54
> Sent by:        "squid-users" <squid-users-bounces <at> lists.squid-cache.org>
> -------------------------
>
>
>
> Send squid-users mailing list submissions to
>                 squid-users <at> lists.squid-cache.org
>
> To subscribe or unsubscribe via the World Wide Web, visit
>                 http://lists.squid-cache.org/listinfo/squid-users
> or, via email, send a message with subject or body 'help' to
>                 squid-users-request <at> lists.squid-cache.org
>
> You can reach the person managing the list at
>                 squid-users-owner <at> lists.squid-cache.org
>
> When replying, please edit your Subject line so it is more specific
> than "Re: Contents of squid-users digest..."
>
>
> Today's Topics:
>
>   1. Re: Problems configuring Squid with C-ICAP+Squidclamav
>      (SOLVED) (Amos Jeffries)
>   2. Re: Linking with *SSL (Spil Oss)
>   3. Re: Getting the full file content on a range
request,                 but not
>      on EVERY get ... (Hans-Peter Jansen)
>   4. Windows Squid with AD authentication (Nilesh Gavali)
>   5. Re: Getting the full file content on a range request, but not
>      on EVERY get ... (Heiler Bemerguy)
>   6. Re: Windows Squid with AD authentication (Antony Stone)
>
>
> ----------------------------------------------------------------------
>
> Message: 1
> Date: Fri, 13 May 2016 00:00:05 +1200
> From: Amos Jeffries <squid3 <at> treenet.co.nz>
> To: squid-users <at> lists.squid-cache.org
> Subject: Re: [squid-users] Problems configuring Squid with
>                 C-ICAP+Squidclamav (SOLVED)
> Message-ID: <dc535419-e24f-b6ee-00ac-45970ec67304 <at> treenet.co.nz>
> Content-Type: text/plain; charset=utf-8
>
> On 12/05/2016 11:13 p.m., C. L. Martinez wrote:
> >
> > But when squid sents an OPTIONS request to ICAP, why works when I
use 127.0.0.1 and not localhost?? Maybe it is a problem with openbsd's
package ...
> >
>
> It is quite possible. 127.0.0.1 is not the only address modern computers
> use for localhost. Double check what your hosts file contains.
>
> Amos
>
>
>
> ------------------------------
>
> Message: 2
> Date: Thu, 12 May 2016 15:33:30 +0200
> From: Spil Oss <spil.oss <at> gmail.com>
> To: squid-users <at> lists.squid-cache.org, timp87 <at> gmail.com
> Subject: Re: [squid-users] Linking with *SSL
> Message-ID:
>                
<CAEJyAvM8O6uVCgSipvzXAK1OsUrH3izc7BVTgaS0kPkWmAn3BQ <at> mail.gmail.com>
> Content-Type: text/plain; charset=UTF-8
>
> > Hi!
> > When we worked on squid port on FreeBSD one of the FreeBSD user
> > (Bernard Spil) noticed:
> >
> > When working on this, I ran into another issue. Perhaps maintainer can
> > fix that with upstream. I've now added LIBOPENSSL_LIBS="-lcrypto
> > -lssl" because of configure failing in configure.ac line 1348.
> >
> > > AC_CHECK_LIB(ssl,[SSL_library_init],[LIBOPENSSL_LIBS="-lssl
$LIBOPENSSL_LIBS"],[AC_MSG_ERROR([library 'ssl' is required for OpenSSL])
> >
> > You cannot link against libssl when not linking libcrypto as well
> > leading to an error with LibreSSL. This check should add -lcrypto in
> > addition to -lssl to pass.
> >
> > Is this something someone could take a look at?
>
> Hi All,
>
> Sorry for replying out-of-thread.
>
> What happens is that the check for SSL_library_init fails as -lcrypto
> is missing.
>
> Output from configure
>
> > checking for CRYPTO_new_ex_data in -lcrypto... yes
> > checking for SSL_library_init in -lssl... no
> > configure: error: library 'ssl' is required for OpenSSL
> > ===>  Script "configure" failed unexpectedly.
>
> What I usually see in autoconf scripts is that temp CFLAGS etc are set
> before the test for SSL libs and reversed after the test.
>
> Adding LIBOPENSSL_LIBS="-lcrypto -lssl" to configure works as well
>
> Would be great if you can fix this!
>
> Thanks,
>
> Bernard Spil.
> https://wiki.freebsd.org/BernardSpil
> https://wiki.freebsd.org/LibreSSL
> https://wiki.freebsd.org/OpenSSL
>
>
> ------------------------------
>
> Message: 3
> Date: Thu, 12 May 2016 16:06:40 +0200
> From: Hans-Peter Jansen <hpj <at> urpla.net>
> To: squid-users <at> lists.squid-cache.org
> Subject: Re: [squid-users] Getting the full file content on a range
>                 request,                 but not on EVERY get ...
> Message-ID: <2575073.4c7f0552JP <at> xrated>
> Content-Type: text/plain; charset="us-ascii"
>
> On Mittwoch, 11. Mai 2016 21:37:17 Heiler Bemerguy wrote:
> > Hey guys,
> >
> > First take a look at the log:
> >
> > root <at> proxy:/var/log/squid# tail -f access.log |grep
> >
http://download.cdn.mozilla.net/pub/firefox/releases/45.0.1/update/win32/pt->
BR/firefox-45.0.1.complete.mar 1463011781.572   8776 10.1.3.236 TCP_MISS/206
> > 300520 GET
> [...]
> > Now think: An user is just doing a segmented/ranged download, right?
> > Squid won't cache the file because it is a range-download, not a full
> > file download.
> > But I WANT squid to cache it. So I decide to use "range_offset_limit
> > -1", but then on every GET squid will re-download the file from the
> > beginning, opening LOTs of simultaneous connections and using too much
> > bandwidth, doing just the OPPOSITE it's meant to!
> >
> > Is there a smart way to allow squid to download it from the beginning to
> > the end (to actually cache it), but only on the FIRST request/get? Even
> > if it makes the user wait for the full download, or cancel it
> > temporarily, or.. whatever!! Anything!!
>
> Well, this is exactly, what my squid_dedup helper was created for!
>
> See my announcement:
>
>                 Subject: [squid-users] New StoreID helper: squid_dedup
>                 Date: Mon, 09 May 2016 23:56:45 +0200
>
> My openSUSE environment is fetching _all_ updates with byte-ranges
from many
> servers. Therefor, I created squid_dedup.
>
> Your specific config could look like this:
>
> /etc/squid/dedup/mozilla.conf:
> [mozilla]
> match: http\:\/\/download\.cdn\.mozilla\.net/(.*)
> replace: http://download.cdn.mozilla.net.%(intdomain)s/\1
<http://download.cdn.mozilla.net.%(intdomain)s//1>
> fetch: true
>
> The fetch parameter is unique among the other StoreID helper (AFAIK):
it is
> fetching the object after a certain delay with a pool of fetcher threads.
>
> The idea is: after the first access for an object, wait a bit (global
setting,
> default: 15 secs), and then fetch the whole thing once. It won't solve
> anything for the first client, but for all subsequent accesses.
>
> The fetcher avoids fetching anything more than once by checking the http
> headers.
>
> This is a pretty new project, but be assured, that the basic functions are
> working fine, and I will do my best to solve any upcoming issues. It is
> implemented with Python3 and prepared for supporting additional features
> easily, while keeping a good part of an eye on efficiency.
>
> Let me know, if you're going to try it.
>
> Pete
>
>
> ------------------------------
>
> Message: 4
> Date: Thu, 12 May 2016 17:46:36 +0100
> From: Nilesh Gavali <nilesh.gavali <at> tcs.com>
> To: squid-users <at> lists.squid-cache.org
> Subject: [squid-users] Windows Squid with AD authentication
> Message-ID:
>                
<OFC3392A46.462F0184-ON80257FB1.00598D57-80257FB1.0059AB8F <at> tcs.com>
> Content-Type: text/plain; charset="utf-8"
>
> Team;
> we have squid running on Windows and need to integrate it with Windows AD
> .can anyone help me with steps to be perform to get this done.
>
> Thanks & Regards
> Nilesh Suresh Gavali
> =====-----=====-----=====
> Notice: The information contained in this e-mail
> message and/or attachments to it may contain
> confidential or privileged information. If you are
> not the intended recipient, any dissemination, use,
> review, distribution, printing or copying of the
> information contained in this e-mail message
> and/or attachments to it are strictly prohibited. If
> you have received this communication in error,
> please notify us by reply e-mail or telephone and
> immediately and permanently delete the message
> and any attachments. Thank you
>
>
> -------------- next part --------------
> An HTML attachment was scrubbed...
> URL:
<http://lists.squid-cache.org/pipermail/squid-users/attachments/20160512/327a38cb/attachment-0001.html>
>
> ------------------------------
>
> Message: 5
> Date: Thu, 12 May 2016 13:28:00 -0300
> From: Heiler Bemerguy <heiler.bemerguy <at> cinbesa.com.br>
> To: squid-users <at> lists.squid-cache.org
> Subject: Re: [squid-users] Getting the full file content on a range
>                 request, but not on EVERY get ...
> Message-ID: <61bf3ff3-c8b2-647f-9b5e-3112b2f43d6c <at> cinbesa.com.br>
> Content-Type: text/plain; charset="utf-8"; Format="flowed"
>
>
> Hi Pete, thanks for replying... let me see if I got it right..
>
> Will I need to specify every url/domain I want it to act on ? I want
> squid to do it for every range-request downloads that should/would be
> cached (based on other rules, pattern_refreshs etc)
>
> It doesn't need to delay any downloads as long as it isn't a dupe of
> what's already being downloaded.....
>
>
> Best Regards,
>
>
> --
> Heiler Bemerguy - (91) 98151-4894
> Assessor Técnico - CINBESA (91) 3184-1751
>
>
> Em 12/05/2016 11:06, Hans-Peter Jansen escreveu:
> > On Mittwoch, 11. Mai 2016 21:37:17 Heiler Bemerguy wrote:
> >> Hey guys,
> >>
> >> First take a look at the log:
> >>
> >> root <at> proxy:/var/log/squid# tail -f access.log |grep
> >>
http://download.cdn.mozilla.net/pub/firefox/releases/45.0.1/update/win32/pt->
BR/firefox-45.0.1.complete.mar 1463011781.572   8776 10.1.3.236 TCP_MISS/206
> >> 300520 GET
> > [...]
> >> Now think: An user is just doing a segmented/ranged download, right?
> >> Squid won't cache the file because it is a range-download, not a full
> >> file download.
> >> But I WANT squid to cache it. So I decide to use "range_offset_limit
> >> -1", but then on every GET squid will re-download the file from the
> >> beginning, opening LOTs of simultaneous connections and using too much
> >> bandwidth, doing just the OPPOSITE it's meant to!
> >>
> >> Is there a smart way to allow squid to download it from the
beginning to
> >> the end (to actually cache it), but only on the FIRST request/get? Even
> >> if it makes the user wait for the full download, or cancel it
> >> temporarily, or.. whatever!! Anything!!
> > Well, this is exactly, what my squid_dedup helper was created for!
> >
> > See my announcement:
> >
> >                  Subject: [squid-users] New StoreID helper: squid_dedup
> >                  Date: Mon, 09 May 2016 23:56:45 +0200
> >
> > My openSUSE environment is fetching _all_ updates with byte-ranges
from many
> > servers. Therefor, I created squid_dedup.
> >
> > Your specific config could look like this:
> >
> > /etc/squid/dedup/mozilla.conf:
> > [mozilla]
> > match: http\:\/\/download\.cdn\.mozilla\.net/(.*)
> > replace: http://download.cdn.mozilla.net.%(intdomain)s/\1
<http://download.cdn.mozilla.net.%(intdomain)s//1>
> > fetch: true
> >
> > The fetch parameter is unique among the other StoreID helper
(AFAIK): it is
> > fetching the object after a certain delay with a pool of fetcher
threads.
> >
> > The idea is: after the first access for an object, wait a bit
(global setting,
> > default: 15 secs), and then fetch the whole thing once. It won't solve
> > anything for the first client, but for all subsequent accesses.
> >
> > The fetcher avoids fetching anything more than once by checking the http
> > headers.
> >
> > This is a pretty new project, but be assured, that the basic
functions are
> > working fine, and I will do my best to solve any upcoming issues. It is
> > implemented with Python3 and prepared for supporting additional features
> > easily, while keeping a good part of an eye on efficiency.
> >
> > Let me know, if you're going to try it.
> >
> > Pete
> > _______________________________________________
> > squid-users mailing list
> > squid-users <at> lists.squid-cache.org
> > http://lists.squid-cache.org/listinfo/squid-users
>
> -------------- next part --------------
> An HTML attachment was scrubbed...
> URL:
<http://lists.squid-cache.org/pipermail/squid-users/attachments/20160512/44b7d9df/attachment-0001.html>
>
> ------------------------------
>
> Message: 6
> Date: Thu, 12 May 2016 18:34:08 +0200
> From: Antony Stone <Antony.Stone <at> squid.open.source.it>
> To: squid-users <at> lists.squid-cache.org
> Subject: Re: [squid-users] Windows Squid with AD authentication
> Message-ID: <201605121834.08490.Antony.Stone <at> squid.open.source.it>
> Content-Type: Text/Plain;  charset="iso-8859-15"
>
> On Thursday 12 May 2016 at 18:46:36, Nilesh Gavali wrote:
>
> > Team;
> > we have squid running on Windows and need to integrate it with
Windows AD
> > .can anyone help me with steps to be perform to get this done.
>
> This specific question has appeared a few times on this list only
recently.
>
> Have you so far:
>
> - searched the list archives for likely answers to your question?
>
> http://lists.squid-cache.org/pipermail/squid-users/
>
> - consulted the Squid documentation for guidance?
>
> http://www.squid-cache.org/Doc/
>
> - looked for any independent HOWTOs etc which show how people have
done this
> in the past?
>
> http://www.google.com/search?q=squid+active+directory+authentication
>
>
> Here's some friendly advice:
>
> 1. The more information you give us (such as: which version of Squid
are you
> using, which version of Windows are you running under, which form of
> authentication are you using?), the easier it is for people here to help.
>
> 2. If you have tried something already and run into problems, tell us
what you
> have tried and what problems (log file extracts, complete client error
message,
> etc) you encountered, so we can offer specific suggestions.
>
> 3. If you haven't yet tried to implement anything, at least let us
know what
> documentation you have looked up and what problems you encountered when
> following it, so we can try to fill in the gaps.
>
>
> Regards,
>
>
> Antony.
>
> --
> Most people have more than the average number of legs.
>
>                                                   Please reply to the
list;
>                                                         please *don't*
CC me.
>
>
> ------------------------------
>
> Subject: Digest Footer
>
> _______________________________________________
> squid-users mailing list
> squid-users <at> lists.squid-cache.org
> http://lists.squid-cache.org/listinfo/squid-users
>
>
> ------------------------------
>
> End of squid-users Digest, Vol 21, Issue 54
> *******************************************
>
>
>
> _______________________________________________
> squid-users mailing list
> squid-users <at> lists.squid-cache.org
> http://lists.squid-cache.org/listinfo/squid-users

-----BEGIN PGP SIGNATURE-----
Version: GnuPG v2

iQEcBAEBCAAGBQJXNLWTAAoJENNXIZxhPexGLs4IAIxzjIvko7Qcgr5sYPuSOl16
fpMti5wfA6jj5J+F3YuobYdzHIp20U08gNR4hWm/9cE1NEfOi1x08m87MEFyb/Nf
Ix2/S1Hfa34HCaEZpJbouk/27Ym5sgTIOF4x19IhJTaEiTiHKV4jq92uxvHZ1vNv
4R539OludR3iVDERhcIo8CeKh2KwfIMxLw/mlpZPMcm0+HdNV19tqUdsVyrWlmJ/
H6FIHsLEXTFM7Z4RlDDnaaRCI5pZJKikD87LjEkOe5a93e7IejpYM8yGKsDtk+zV
tnZr4vloal/xRC9LqRcrZi6EtEz1eB4DhMEjtJMwx59mbjNoJLxJaHocBj+ce/0=
=+Y8y
-----END PGP SIGNATURE-----

-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.squid-cache.org/pipermail/squid-users/attachments/20160512/db5c07e2/attachment.html>
-------------- next part --------------
A non-text attachment was scrubbed...
Name: 0x613DEC46.asc
Type: application/pgp-keys
Size: 2437 bytes
Desc: not available
URL: <http://lists.squid-cache.org/pipermail/squid-users/attachments/20160512/db5c07e2/attachment.key>

------------------------------

Subject: Digest Footer

_______________________________________________
squid-users mailing list
squid-users <at> lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


------------------------------

End of squid-users Digest, Vol 21, Issue 56
*******************************************

_______________________________________________
squid-users mailing list
squid-users <at> lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users
Nilesh Gavali | 12 May 19:15 2016

Re: squid-users Digest, Vol 21, Issue 54

Hello Antony;
we have Squid 3.5 on Windows 2012 R2 OS & for which I need to integrate squid with AD. I search online but all of the link are based on linux platform squid.
I am looking for squid running on Windows Platform which need to integrate with AD authentication.

Thanks & Regards
Nilesh Suresh Gavali



From:        squid-users-request <at> lists.squid-cache.org
To:        squid-users <at> lists.squid-cache.org
Date:        12/05/2016 17:33
Subject:        squid-users Digest, Vol 21, Issue 54
Sent by:        "squid-users" <squid-users-bounces <at> lists.squid-cache.org>



Send squid-users mailing list submissions to
                squid-users <at> lists.squid-cache.org

To subscribe or unsubscribe via the World Wide Web, visit
                http://lists.squid-cache.org/listinfo/squid-users
or, via email, send a message with subject or body 'help' to
                squid-users-request <at> lists.squid-cache.org

You can reach the person managing the list at
                squid-users-owner <at> lists.squid-cache.org

When replying, please edit your Subject line so it is more specific
than "Re: Contents of squid-users digest..."


Today's Topics:

  1. Re: Problems configuring Squid with C-ICAP+Squidclamav
     (SOLVED) (Amos Jeffries)
  2. Re: Linking with *SSL (Spil Oss)
  3. Re: Getting the full file content on a range request,                 but not
     on EVERY get ... (Hans-Peter Jansen)
  4. Windows Squid with AD authentication (Nilesh Gavali)
  5. Re: Getting the full file content on a range request, but not
     on EVERY get ... (Heiler Bemerguy)
  6. Re: Windows Squid with AD authentication (Antony Stone)


----------------------------------------------------------------------

Message: 1
Date: Fri, 13 May 2016 00:00:05 +1200
From: Amos Jeffries <squid3 <at> treenet.co.nz>
To: squid-users <at> lists.squid-cache.org
Subject: Re: [squid-users] Problems configuring Squid with
                C-ICAP+Squidclamav (SOLVED)
Message-ID: <dc535419-e24f-b6ee-00ac-45970ec67304 <at> treenet.co.nz>
Content-Type: text/plain; charset=utf-8

On 12/05/2016 11:13 p.m., C. L. Martinez wrote:
>
> But when squid sents an OPTIONS request to ICAP, why works when I use 127.0.0.1 and not localhost?? Maybe it is a problem with openbsd's package ...
>

It is quite possible. 127.0.0.1 is not the only address modern computers
use for localhost. Double check what your hosts file contains.

Amos



------------------------------

Message: 2
Date: Thu, 12 May 2016 15:33:30 +0200
From: Spil Oss <spil.oss <at> gmail.com>
To: squid-users <at> lists.squid-cache.org, timp87 <at> gmail.com
Subject: Re: [squid-users] Linking with *SSL
Message-ID:
                <CAEJyAvM8O6uVCgSipvzXAK1OsUrH3izc7BVTgaS0kPkWmAn3BQ <at> mail.gmail.com>
Content-Type: text/plain; charset=UTF-8

> Hi!
> When we worked on squid port on FreeBSD one of the FreeBSD user
> (Bernard Spil) noticed:
>
> When working on this, I ran into another issue. Perhaps maintainer can
> fix that with upstream. I've now added LIBOPENSSL_LIBS="-lcrypto
> -lssl" because of configure failing in configure.ac line 1348.
>
> > AC_CHECK_LIB(ssl,[SSL_library_init],[LIBOPENSSL_LIBS="-lssl $LIBOPENSSL_LIBS"],[AC_MSG_ERROR([library 'ssl' is required for OpenSSL])
>
> You cannot link against libssl when not linking libcrypto as well
> leading to an error with LibreSSL. This check should add -lcrypto in
> addition to -lssl to pass.
>
> Is this something someone could take a look at?

Hi All,

Sorry for replying out-of-thread.

What happens is that the check for SSL_library_init fails as -lcrypto
is missing.

Output from configure

> checking for CRYPTO_new_ex_data in -lcrypto... yes
> checking for SSL_library_init in -lssl... no
> configure: error: library 'ssl' is required for OpenSSL
> ===>  Script "configure" failed unexpectedly.

What I usually see in autoconf scripts is that temp CFLAGS etc are set
before the test for SSL libs and reversed after the test.

Adding LIBOPENSSL_LIBS="-lcrypto -lssl" to configure works as well

Would be great if you can fix this!

Thanks,

Bernard Spil.
https://wiki.freebsd.org/BernardSpil
https://wiki.freebsd.org/LibreSSL
https://wiki.freebsd.org/OpenSSL


------------------------------

Message: 3
Date: Thu, 12 May 2016 16:06:40 +0200
From: Hans-Peter Jansen <hpj <at> urpla.net>
To: squid-users <at> lists.squid-cache.org
Subject: Re: [squid-users] Getting the full file content on a range
                request,                 but not on EVERY get ...
Message-ID: <2575073.4c7f0552JP <at> xrated>
Content-Type: text/plain; charset="us-ascii"

On Mittwoch, 11. Mai 2016 21:37:17 Heiler Bemerguy wrote:
> Hey guys,
>
> First take a look at the log:
>
> root <at> proxy:/var/log/squid# tail -f access.log |grep
> http://download.cdn.mozilla.net/pub/firefox/releases/45.0.1/update/win32/pt-> BR/firefox-45.0.1.complete.mar 1463011781.572   8776 10.1.3.236 TCP_MISS/206
> 300520 GET
[...]
> Now think: An user is just doing a segmented/ranged download, right?
> Squid won't cache the file because it is a range-download, not a full
> file download.
> But I WANT squid to cache it. So I decide to use "range_offset_limit
> -1", but then on every GET squid will re-download the file from the
> beginning, opening LOTs of simultaneous connections and using too much
> bandwidth, doing just the OPPOSITE it's meant to!
>
> Is there a smart way to allow squid to download it from the beginning to
> the end (to actually cache it), but only on the FIRST request/get? Even
> if it makes the user wait for the full download, or cancel it
> temporarily, or.. whatever!! Anything!!

Well, this is exactly, what my squid_dedup helper was created for!

See my announcement:

                Subject: [squid-users] New StoreID helper: squid_dedup
                Date: Mon, 09 May 2016 23:56:45 +0200

My openSUSE environment is fetching _all_ updates with byte-ranges from many
servers. Therefor, I created squid_dedup.

Your specific config could look like this:

/etc/squid/dedup/mozilla.conf:
[mozilla]
match: http\:\/\/download\.cdn\.mozilla\.net/(.*)
replace: http://download.cdn.mozilla.net.%(intdomain)s/\1
fetch: true

The fetch parameter is unique among the other StoreID helper (AFAIK): it is
fetching the object after a certain delay with a pool of fetcher threads.

The idea is: after the first access for an object, wait a bit (global setting,
default: 15 secs), and then fetch the whole thing once. It won't solve
anything for the first client, but for all subsequent accesses.

The fetcher avoids fetching anything more than once by checking the http
headers.

This is a pretty new project, but be assured, that the basic functions are
working fine, and I will do my best to solve any upcoming issues. It is
implemented with Python3 and prepared for supporting additional features
easily, while keeping a good part of an eye on efficiency.

Let me know, if you're going to try it.

Pete


------------------------------

Message: 4
Date: Thu, 12 May 2016 17:46:36 +0100
From: Nilesh Gavali <nilesh.gavali <at> tcs.com>
To: squid-users <at> lists.squid-cache.org
Subject: [squid-users] Windows Squid with AD authentication
Message-ID:
                <OFC3392A46.462F0184-ON80257FB1.00598D57-80257FB1.0059AB8F <at> tcs.com>
Content-Type: text/plain; charset="utf-8"

Team;
we have squid running on Windows and need to integrate it with Windows AD
.can anyone help me with steps to be perform to get this done.

Thanks & Regards
Nilesh Suresh Gavali
=====-----=====-----=====
Notice: The information contained in this e-mail
message and/or attachments to it may contain
confidential or privileged information. If you are
not the intended recipient, any dissemination, use,
review, distribution, printing or copying of the
information contained in this e-mail message
and/or attachments to it are strictly prohibited. If
you have received this communication in error,
please notify us by reply e-mail or telephone and
immediately and permanently delete the message
and any attachments. Thank you


-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.squid-cache.org/pipermail/squid-users/attachments/20160512/327a38cb/attachment-0001.html>

------------------------------

Message: 5
Date: Thu, 12 May 2016 13:28:00 -0300
From: Heiler Bemerguy <heiler.bemerguy <at> cinbesa.com.br>
To: squid-users <at> lists.squid-cache.org
Subject: Re: [squid-users] Getting the full file content on a range
                request, but not on EVERY get ...
Message-ID: <61bf3ff3-c8b2-647f-9b5e-3112b2f43d6c <at> cinbesa.com.br>
Content-Type: text/plain; charset="utf-8"; Format="flowed"


Hi Pete, thanks for replying... let me see if I got it right..

Will I need to specify every url/domain I want it to act on ? I want
squid to do it for every range-request downloads that should/would be
cached (based on other rules, pattern_refreshs etc)

It doesn't need to delay any downloads as long as it isn't a dupe of
what's already being downloaded.....


Best Regards,


--
Heiler Bemerguy - (91) 98151-4894
Assessor Técnico - CINBESA (91) 3184-1751


Em 12/05/2016 11:06, Hans-Peter Jansen escreveu:
> On Mittwoch, 11. Mai 2016 21:37:17 Heiler Bemerguy wrote:
>> Hey guys,
>>
>> First take a look at the log:
>>
>> root <at> proxy:/var/log/squid# tail -f access.log |grep
>> http://download.cdn.mozilla.net/pub/firefox/releases/45.0.1/update/win32/pt-> BR/firefox-45.0.1.complete.mar 1463011781.572   8776 10.1.3.236 TCP_MISS/206
>> 300520 GET
> [...]
>> Now think: An user is just doing a segmented/ranged download, right?
>> Squid won't cache the file because it is a range-download, not a full
>> file download.
>> But I WANT squid to cache it. So I decide to use "range_offset_limit
>> -1", but then on every GET squid will re-download the file from the
>> beginning, opening LOTs of simultaneous connections and using too much
>> bandwidth, doing just the OPPOSITE it's meant to!
>>
>> Is there a smart way to allow squid to download it from the beginning to
>> the end (to actually cache it), but only on the FIRST request/get? Even
>> if it makes the user wait for the full download, or cancel it
>> temporarily, or.. whatever!! Anything!!
> Well, this is exactly, what my squid_dedup helper was created for!
>
> See my announcement:
>
>                  Subject: [squid-users] New StoreID helper: squid_dedup
>                  Date: Mon, 09 May 2016 23:56:45 +0200
>
> My openSUSE environment is fetching _all_ updates with byte-ranges from many
> servers. Therefor, I created squid_dedup.
>
> Your specific config could look like this:
>
> /etc/squid/dedup/mozilla.conf:
> [mozilla]
> match: http\:\/\/download\.cdn\.mozilla\.net/(.*)
> replace: http://download.cdn.mozilla.net.%(intdomain)s/\1
> fetch: true
>
> The fetch parameter is unique among the other StoreID helper (AFAIK): it is
> fetching the object after a certain delay with a pool of fetcher threads.
>
> The idea is: after the first access for an object, wait a bit (global setting,
> default: 15 secs), and then fetch the whole thing once. It won't solve
> anything for the first client, but for all subsequent accesses.
>
> The fetcher avoids fetching anything more than once by checking the http
> headers.
>
> This is a pretty new project, but be assured, that the basic functions are
> working fine, and I will do my best to solve any upcoming issues. It is
> implemented with Python3 and prepared for supporting additional features
> easily, while keeping a good part of an eye on efficiency.
>
> Let me know, if you're going to try it.
>
> Pete
> _______________________________________________
> squid-users mailing list
> squid-users <at> lists.squid-cache.org
> http://lists.squid-cache.org/listinfo/squid-users

-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.squid-cache.org/pipermail/squid-users/attachments/20160512/44b7d9df/attachment-0001.html>

------------------------------

Message: 6
Date: Thu, 12 May 2016 18:34:08 +0200
From: Antony Stone <Antony.Stone <at> squid.open.source.it>
To: squid-users <at> lists.squid-cache.org
Subject: Re: [squid-users] Windows Squid with AD authentication
Message-ID: <201605121834.08490.Antony.Stone <at> squid.open.source.it>
Content-Type: Text/Plain;  charset="iso-8859-15"

On Thursday 12 May 2016 at 18:46:36, Nilesh Gavali wrote:

> Team;
> we have squid running on Windows and need to integrate it with Windows AD
> .can anyone help me with steps to be perform to get this done.

This specific question has appeared a few times on this list only recently.

Have you so far:

- searched the list archives for likely answers to your question?

http://lists.squid-cache.org/pipermail/squid-users/

- consulted the Squid documentation for guidance?

http://www.squid-cache.org/Doc/

- looked for any independent HOWTOs etc which show how people have done this
in the past?

http://www.google.com/search?q=squid+active+directory+authentication


Here's some friendly advice:

1. The more information you give us (such as: which version of Squid are you
using, which version of Windows are you running under, which form of
authentication are you using?), the easier it is for people here to help.

2. If you have tried something already and run into problems, tell us what you
have tried and what problems (log file extracts, complete client error message,
etc) you encountered, so we can offer specific suggestions.

3. If you haven't yet tried to implement anything, at least let us know what
documentation you have looked up and what problems you encountered when
following it, so we can try to fill in the gaps.


Regards,


Antony.

--
Most people have more than the average number of legs.

                                                  Please reply to the list;
                                                        please *don't* CC me.


------------------------------

Subject: Digest Footer

_______________________________________________
squid-users mailing list
squid-users <at> lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users


------------------------------

End of squid-users Digest, Vol 21, Issue 54
*******************************************

_______________________________________________
squid-users mailing list
squid-users <at> lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users
Nilesh Gavali | 12 May 18:46 2016

Windows Squid with AD authentication

Team;
we have squid running on Windows and need to integrate it with Windows AD .can anyone help me with steps to be perform to get this done.

Thanks & Regards
Nilesh Suresh Gavali

=====-----=====-----=====
Notice: The information contained in this e-mail
message and/or attachments to it may contain
confidential or privileged information. If you are
not the intended recipient, any dissemination, use,
review, distribution, printing or copying of the
information contained in this e-mail message
and/or attachments to it are strictly prohibited. If
you have received this communication in error,
please notify us by reply e-mail or telephone and
immediately and permanently delete the message
and any attachments. Thank you

_______________________________________________
squid-users mailing list
squid-users <at> lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users
admin | 12 May 05:00 2016
Picon

Re: Squid 4.0.10 https intercept

I create cert:

openssl req -new -newkey rsa:1024 -days 365 -nodes -x509 -keyout 
squidCA.pem -out squidCA.pem

And export it:

openssl x509 -in squidCA.pem -outform DER -out squidCA.crt

Wrong?

Amos Jeffries писал 2016-05-11 17:18:

> On 11/05/2016 11:59 p.m., admin wrote:
> 
>> I just thought! I runs the
>> 
>> openssl x509 -in squidCA.pem -outform DER -out squidCA.crt
>> 
>> import cert and now get ERR_CERT_COMMON_NAME_INVALID
>> 
>> where did I go wrong?
> 
> Hmm. I'm not sure that one is you. If it is getting past the CA trust
> check then what you did earlier was okay.
> 
> This one sounds like either the CA was generated with something for CN
> field that was not right. Or that the cert generated by Squid is broken
> in that way.
> 
> There are two reasons the Squid generated cert might be broken. In this
> order of relevance:
> 
> 1) the server the client was tryign to contact had a broken cert. Mimic
> feature in Squid will copy cert breakages so the client can make its
> security decisions on as fully accurate information as possible.
> 
> 2) a bug in Squid.
> 
> Some more research to find out what exactly is being identified as
> invalid, and where it comes from will be needed to discover whch case 
> is
> relevant.
> 
> Amos
> 
> Amos Jeffries писал 2016-05-11 16:43:
> 
> On 11/05/2016 6:35 p.m., Компания АйТи Крауд wrote:
> 
> hi!
> 
> I use squid 4.0.10 in INTERCEPT mode. If I deny some users
> (ip-addresses) with
> 
> acl users_no_inet src "/etc/squid/ip-groups/no-inet"
> http_access deny users_no_inet
> 
> ERR_ACCESS_DENIED is displayed then go to HTTP. If go to HTTPS then
> first I see browser's NET::ERR_CERT_AUTHORITY_INVALID, and then click
> "unsecure" see ERR_ACCESS_DENIED.
> 
> How to make that right display ERR_ACCESS_DENIED on HTTPS for deny user
> in Squid 4.0 ?
> What you describe above is correct behaviour. The browser does not 
> trust
> your proxy's CA.
> 
> The only way to get around the browser warning about TLS security issue
> is to install the CA used by the proxy into the browser trusted CA set.
> 
> Amos
> 
> _______________________________________________
> squid-users mailing list
> squid-users <at> lists.squid-cache.org
> http://lists.squid-cache.org/listinfo/squid-users
_______________________________________________
squid-users mailing list
squid-users <at> lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users
Heiler Bemerguy | 12 May 02:37 2016
Picon

Getting the full file content on a range request, but not on EVERY get ...


Hey guys,

First take a look at the log:

root <at> proxy:/var/log/squid# tail -f access.log |grep http://download.cdn.mozilla.net/pub/firefox/releases/45.0.1/update/win32/pt-BR/firefox-45.0.1.complete.mar
1463011781.572   8776 10.1.3.236 TCP_MISS/206 300520 GET http://download.cdn.mozilla.net/pub/firefox/releases/45.0.1/update/win32/pt-BR/firefox-45.0.1.complete.mar - HIER_DIRECT/200.216.8.9 application/octet-stream
1463011851.008   9347 10.1.3.236 TCP_MISS/206 300520 GET http://download.cdn.mozilla.net/pub/firefox/releases/45.0.1/update/win32/pt-BR/firefox-45.0.1.complete.mar - HIER_DIRECT/200.216.8.32 application/octet-stream
1463011920.683   9645 10.1.3.236 TCP_MISS/206 300520 GET http://download.cdn.mozilla.net/pub/firefox/releases/45.0.1/update/win32/pt-BR/firefox-45.0.1.complete.mar - HIER_DIRECT/200.216.8.9 application/octet-stream
1463012000.144  19154 10.1.3.236 TCP_MISS/206 300520 GET http://download.cdn.mozilla.net/pub/firefox/releases/45.0.1/update/win32/pt-BR/firefox-45.0.1.complete.mar - HIER_DIRECT/200.216.8.32 application/octet-stream
1463012072.276  12121 10.1.3.236 TCP_MISS/206 300520 GET http://download.cdn.mozilla.net/pub/firefox/releases/45.0.1/update/win32/pt-BR/firefox-45.0.1.complete.mar - HIER_DIRECT/200.216.8.32 application/octet-stream
1463012145.643  13358 10.1.3.236 TCP_MISS/206 300520 GET http://download.cdn.mozilla.net/pub/firefox/releases/45.0.1/update/win32/pt-BR/firefox-45.0.1.complete.mar - HIER_DIRECT/200.216.8.32 application/octet-stream
1463012217.472  11772 10.1.3.236 TCP_MISS/206 300520 GET http://download.cdn.mozilla.net/pub/firefox/releases/45.0.1/update/win32/pt-BR/firefox-45.0.1.complete.mar - HIER_DIRECT/200.216.8.32 application/octet-stream
1463012294.676  17148 10.1.3.236 TCP_MISS/206 300520 GET http://download.cdn.mozilla.net/pub/firefox/releases/45.0.1/update/win32/pt-BR/firefox-45.0.1.complete.mar - HIER_DIRECT/200.216.8.32 application/octet-stream
1463012370.131  15272 10.1.3.236 TCP_MISS/206 300520 GET http://download.cdn.mozilla.net/pub/firefox/releases/45.0.1/update/win32/pt-BR/firefox-45.0.1.complete.mar - HIER_DIRECT/200.216.8.32 application/octet-stream

Now think: An user is just doing a segmented/ranged download, right? Squid won't cache the file because it is a range-download, not a full file download.
But I WANT squid to cache it. So I decide to use "range_offset_limit -1", but then on every GET squid will re-download the file from the beginning, opening LOTs of simultaneous connections and using too much bandwidth, doing just the OPPOSITE it's meant to!

Is there a smart way to allow squid to download it from the beginning to the end (to actually cache it), but only on the FIRST request/get? Even if it makes the user wait for the full download, or cancel it temporarily, or.. whatever!! Anything!!

Best Regards,
-- Heiler Bemerguy - (91) 98151-4894 Assessor Técnico - CINBESA (91) 3184-1751
_______________________________________________
squid-users mailing list
squid-users <at> lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users

Gmane