James Harper | 17 Oct 04:42 2014

website search broken

Doing a search on the main squid page gives me this:

The requested URL /cgi-bin/swish-query.cgi was not found on this server.

Maybe it's better to do a Google search anyway?

James
_______________________________________________
squid-users mailing list
squid-users <at> lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users
Darren Spruell | 16 Oct 21:10 2014

Supported configuration for adding origin server IP in response header

Had a use case to ask about; apologies if I missed it in the docs. Is there a
configuration that allows squid, running as a forward proxy, to add a
custom response header containing the IP address of the origin server that
served the resource? Assume no cache hierarchy.

In the event that the resource is served from cache, it would be
interesting if squid were able to track the IP address from which the
cached resource was originally retrieved and include that in responses.
If that's not possible, then the IP address of the cache itself, along
with an indication that the resource was served from cache rather than
an upstream origin.

Most resources seem to cover including this information in the access
log; however, I'm interested in having the data in the HTTP response
in this case.
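A sketch, assuming a Squid release that supports reply_header_add (4.x or later), whose field-value accepts logformat % codes; the X-Origin-IP header name here is made up:

```
# assumes a Squid release with reply_header_add (4.x or later);
# %<a expands to the IP of the last server or peer connection, so
# on a cache hit it may be "-" rather than the original origin IP
reply_header_add X-Origin-IP "%<a" all
```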

-- 
Darren Spruell
phatbuckett <at> gmail.com
daniel.rieken | 16 Oct 16:35 2014

DEAD Parent detection

Hi guys,

I have a problem with DEAD parent detection.
I've configured 2 parents in squid.conf:

cache_peer 10.0.0.101 parent 3128 0 default name=TEST1
cache_peer 10.0.0.102 parent 3128 0 name=TEST2

So when the first parent isn't reachable, squid detects this ("Detected DEAD Parent: TEST1") and uses
the second parent.
This works fine for HTTP traffic, but for HTTPS traffic the DEAD parent detection doesn't work: squid
keeps sending all requests to the first parent and never switches to the second one.

What am I doing wrong? Is there anything I missed? Or is it a bug?
I'm using squid 3.1 (Debian wheezy).
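One thing worth checking: CONNECT (HTTPS) requests are treated as non-hierarchical by default, so they may bypass normal peer selection. A sketch to test, assuming the parents are meant to carry HTTPS as well:

```
# let non-hierarchical requests (including CONNECT) use the parents
# instead of going direct, so dead-parent failover applies to HTTPS too
nonhierarchical_direct off
prefer_direct off
```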

Thanks for your help,
Daniel
apfelstrudel | 16 Oct 10:13 2014

ssl-bump doesn't decrypt https traffic - please help

Hello.
I am trying to get ssl-bump to decrypt https traffic transparently so that I can filter out adult videos from youtube and globally enforce google safesearch on my network with diladele web safety. I also want to run dansguardian to filter http. I managed to pass https traffic transparently to squid, but ssl-bump doesn't decrypt it: in the logs I can see the https websites only in the encrypted form of website's.ip.address:port (45.231.21.56:443, for example) instead of the https url (like https://youtube.com). That means the traffic is still encrypted, and because of that diladele can't filter https.

Squid is installed on an eee pc netbook running fedora 20. This machine is also my router and network gateway; 172.16.34.254 is the ip on which the netbook "sees" the internal network. The network consists of one tp-link router directly connected to the eee. That router is connected wirelessly (Wi-Fi antenna) to a second TP-Link router (bridge) in my house. The bridge router is then connected by an ethernet cable to another router, to which my devices (phone, tablet, pc, printer) finally connect. So in summary: my device (PC, tablet, phone) ----> Router (Netgear) ----> TP-Link Bridge Router ----> Router (TP-Link) ----> network gateway/router (eee pc running fedora 20) with squid installed.

With the current configuration dansguardian works (http), diladele web safety works (http only), and the https traffic is passed transparently through squid, but not decrypted:
 
172.16.34.253 TCP_MISS/301 848 GET http://pl-pl.facebook.com/ - HIER_DIRECT/31.13.93.97 text/html
172.16.34.254 TCP_MISS/200 50622 CONNECT 2.22.52.26:443 - HIER_DIRECT/2.22.52.26 -  <----- this should be https://pl-pl.facebook.com but ssl-bump doesn't decrypt traffic.
 
The IP addresses at the beginning of each line are different because http requests come from dansguardian internally, while the https requests go directly from my internal network.
 
Here's my squid.conf:

acl localnet src 10.0.0.0/8 # RFC1918 possible internal network
acl localnet src 172.16.0.0/12 # RFC1918 possible internal network
acl localnet src 192.168.0.0/16 # RFC1918 possible internal network
acl localnet src fc00::/7 # RFC 4193 local private network range
acl localnet src fe80::/10 # RFC 4291 link-local (directly plugged) machines
acl to_localhost dst 127.0.0.1/8

acl SSL_ports port 443
acl Safe_ports port 80 # http
acl Safe_ports port 21 # ftp
acl Safe_ports port 443 # https
acl Safe_ports port 70 # gopher
acl Safe_ports port 210 # wais
acl Safe_ports port 1025-65535 # unregistered ports
acl Safe_ports port 280 # http-mgmt
acl Safe_ports port 488 # gss-http
acl Safe_ports port 591 # filemaker
acl Safe_ports port 777 # multiling http
acl CONNECT method CONNECT

# Deny requests to certain unsafe ports
http_access deny !Safe_ports

# Deny CONNECT to other than secure SSL ports
http_access deny CONNECT !SSL_ports

# Only allow cachemgr access from localhost
http_access allow localhost manager
http_access deny manager

http_access allow localnet
http_access allow localhost

# And finally deny all other access to this proxy
http_access allow all
http_access allow CONNECT
http_access allow to_localhost

include "/opt/qlproxy/etc/squid/squid.acl"

# Squid normally listens to port 3128
# Dansguardian's port:
http_port 3125
# HTTPS ports, required by diladele web safety:
http_port 3126 intercept
https_port 3127 transparent ssl-bump generate-host-certificates=on dynamic_cert_mem_cache_size=4MB cert=/opt/qlproxy/etc/myca.pem
http_port 3128 ssl-bump generate-host-certificates=on dynamic_cert_mem_cache_size=4MB cert=/opt/qlproxy/etc/myca.pem
always_direct allow all
ssl_bump client-first all

# certificate storage manager
sslcrtd_program /usr/lib/squid/ssl_crtd -s /var/spool/squid_ssldb -M 4MB

# Uncomment and adjust the following to add a disk cache directory.
#cache_dir ufs /var/spool/squid 100 16 256

# Leave coredumps in the first cache dir
coredump_dir /var/spool/squid

refresh_pattern ^ftp: 1440 20% 1008
refresh_pattern ^gopher: 1440 0% 1440
refresh_pattern -i (/cgi-bin/|\?) 0 0% 0
refresh_pattern . 0 20% 4320

# Squid-Diladele integration:
icap_enable on
icap_preview_enable on
icap_preview_size 4096
icap_persistent_connections on
icap_send_client_ip on
icap_send_client_username on
icap_client_username_header X-Client-Username
icap_service qlproxy1 reqmod_precache bypass=0 icap://127.0.0.1:1344/reqmod
icap_service qlproxy2 respmod_precache bypass=0 icap://127.0.0.1:1344/respmod
acl qlproxy_icap_edomains dstdomain "/opt/qlproxy/etc/squid/icap_exclusions_domains.conf"
acl qlproxy_icap_etypes rep_mime_type "/opt/qlproxy/etc/squid/icap_exclusions_contenttypes.conf"
adaptation_access qlproxy1 deny qlproxy_icap_edomains
adaptation_access qlproxy2 deny qlproxy_icap_edomains
adaptation_access qlproxy2 deny qlproxy_icap_etypes
adaptation_access qlproxy1 allow all
adaptation_access qlproxy2 allow all
#squid shutdown faster
shutdown_lifetime 3 seconds
--------------------------------------------------
And here are my iptables:
 
*filter
:INPUT DROP [0:0]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [0:0]
-A INPUT -p icmp -j ACCEPT
-A INPUT -i lo -j ACCEPT
# ssh
-A INPUT -p tcp -m tcp --dport 22 -j ACCEPT
# dansguardian
-A INPUT -i p33p1 -p tcp --dport 8080 -j ACCEPT
# squid https
-A INPUT -i p33p1 -p tcp --dport 3128 -j ACCEPT
# 3127 - for intercepted https traffic for Squid
-A INPUT -i p33p1 -p tcp --dport 3127 -j ACCEPT
# squid - allow the redirected traffic from port 443 to 3128
-A INPUT -m mark --mark 1 -j DROP
# squid - block direct connections to port 3128
-A INPUT -i p33p1 -p tcp --dport 3128 -j REJECT
# connected streams
-A INPUT -m state --state RELATED,ESTABLISHED -j ACCEPT
#-A INPUT -j LOG --log-prefix "DROPPED_INPUT: "
COMMIT
*nat
:OUTPUT ACCEPT [0:0]
:PREROUTING ACCEPT [0:0]
:POSTROUTING ACCEPT [0:0]
# all queries go to opendsns familyshield:
-A PREROUTING -p udp -i p33p1 --dport 53 -j DNAT --to-destination 208.67.222.123:53
# redirection of internal network's http traffic to dansguardian:
-A PREROUTING -p tcp -m tcp -i p33p1 -s 172.16.34.254/32 --dport 80 -j REDIRECT --to-ports 8080
# https redirection to squid
-A PREROUTING -p tcp -m tcp -i p33p1 -s 172.16.34.254/32 --dport 443 -j REDIRECT --to-ports 3127
#NAT
-A POSTROUTING -s 172.16.34.252/30 -j MASQUERADE
COMMIT
*mangle
:INPUT ACCEPT [0:0]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [0:0]
:POSTROUTING ACCEPT [0:0]
:PREROUTING ACCEPT [0:0]
-A PREROUTING -p tcp -m tcp -i p33p1 --dport 3128 -j MARK --set-mark 1
-A PREROUTING -p tcp --dport 80 -s 127.0.0.1 -j ACCEPT
-A PREROUTING -p tcp --dport 80 -s 172.16.34.253 -j ACCEPT
COMMIT
# Completed
I also tried running squid with the "squid -d 10" command, but no errors showed up:
 
2014/10/16 10:08:46 kid1| HTCP Disabled.
2014/10/16 10:08:46 kid1| Squid plugin modules loaded: 0
2014/10/16 10:08:46 kid1| Adaptation support is on
2014/10/16 10:08:46 kid1| Accepting HTTP Socket connections at local=[::]:3125 remote=[::] FD 21 flags=9
2014/10/16 10:08:46 kid1| Accepting NAT intercepted HTTP Socket connections at local=0.0.0.0:3126 remote=[::] FD 22 flags=41
2014/10/16 10:08:46 kid1| Accepting SSL bumped HTTP Socket connections at local=[::]:3128 remote=[::] FD 23 flags=9
2014/10/16 10:08:46 kid1| Accepting NAT intercepted SSL bumped HTTPS Socket connections at local=0.0.0.0:3127 remote=[::] FD 24 flags=41
2014/10/16 10:08:47 kid1| storeLateRelease: released 0 objects
How can I get squid to decrypt https traffic with this configuration? Any help will be much appreciated.
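For what it's worth, on Squid 3.2 and later the "transparent" flag on https_port is deprecated in favour of "intercept", and intercepted traffic is usually bumped with server-first rather than client-first. A sketch of that variant, keeping the paths from the config above (untested, and assuming a 3.2+ build with SSL support):

```
https_port 3127 intercept ssl-bump generate-host-certificates=on \
    dynamic_cert_mem_cache_size=4MB cert=/opt/qlproxy/etc/myca.pem
ssl_bump server-first all
```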
Jason Haar | 16 Oct 09:54 2014

squid-3.4.8 sslbump breaks facebook

Hi there

Weird. sslbump seems to be working well; it even intercepts twitter.com
fine under FF-33 (with its pinning support, due to
security.cert_pinning.enforcement_level=1).

However, facebook.com generates a "sec_error_inadequate_key_usage"
error. I cranked up debugging and saw the log below. As you can see, the proxy
has IPv6 support and is actually intercepting google.com over IPv6
successfully, so I don't think it has anything to do with networking. I
can use "curl -v" to confirm it successfully downloaded the front page
over the same IPv6 address too. I also checked the ssl_db/certs dir,
removed the facebook certs and restarted - that didn't help.

If I look at the real www.facebook.com cert, I see

            X509v3 Subject Alternative Name:
                DNS:*.facebook.com, DNS:facebook.com, DNS:*.fbsbx.com,
DNS:*.fbcdn.net, DNS:*.xx.fbcdn.net, DNS:*.xy.fbcdn.net, DNS:fb.com,
DNS:*.fb.com
            X509v3 Key Usage: critical
                Digital Signature, Key Agreement
            X509v3 Extended Key Usage:
                TLS Web Server Authentication, TLS Web Client Authentication

however, the squid-created cert shows

            X509v3 Subject Alternative Name:
                DNS:*.facebook.com, DNS:facebook.com, DNS:*.fbsbx.com,
DNS:*.fbcdn.net, DNS:*.xx.fbcdn.net, DNS:*.xy.fbcdn.net, DNS:fb.com,
DNS:*.fb.com
            X509v3 Key Usage: critical
                .
            X509v3 Extended Key Usage:
                TLS Web Server Authentication, TLS Web Client Authentication

So squid is failing to set "X509v3 Key Usage" correctly?
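One way to double-check what Key Usage bits a certificate actually carries is openssl. A local sketch (file paths are mine; it generates a throwaway cert with the same Key Usage as the real facebook.com cert, which needs OpenSSL 1.1.1+ for -addext, then dumps the extension block; a mimicked cert from ssl_crtd can be inspected the same way):

```shell
# create a throwaway self-signed cert carrying the Key Usage bits
# the real facebook.com cert has (Digital Signature, Key Agreement)
openssl req -x509 -newkey rsa:2048 -nodes \
  -keyout /tmp/ku-key.pem -out /tmp/ku-cert.pem -days 1 \
  -subj "/CN=example.test" \
  -addext "keyUsage=critical,digitalSignature,keyAgreement"

# dump the Key Usage extension and the line that follows it
openssl x509 -in /tmp/ku-cert.pem -noout -text | grep -A1 "Key Usage"
```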

Jason

1413438531.233   2192 127.0.0.1 TAG_NONE/200 0 CONNECT
www.facebook.com:443 - HIER_DIRECT/2a03:2880:20:4f06:face:b00c:0:1 -
[User-Agent: Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:33.0)
Gecko/20100101 Firefox/33.0\r\nProxy-Connection:
keep-alive\r\nConnection: keep-alive\r\nHost: www.facebook.com:443\r\n] []

generates the following...

2014/10/16 18:40:16.194 kid1| dns_internal.cc(1092) idnsCallback:
Merging DNS results www.facebook.com A has 2 RR, AAAA has 2 RR
2014/10/16 18:40:16.194 kid1| ipcache.cc(498) ipcacheParse:
ipcacheParse: 4 answers for 'www.facebook.com'
2014/10/16 18:40:16.194 kid1| ipcache.cc(567) ipcacheParse:
ipcacheParse: www.facebook.com #0 [2a03:2880:20:4f06:face:b00c:0:1]
2014/10/16 18:40:16.194 kid1| ipcache.cc(556) ipcacheParse:
ipcacheParse: www.facebook.com #1 173.252.74.22
2014/10/16 18:40:16.194 kid1| peer_select.cc(286) peerSelectDnsPaths:
Found sources for 'www.facebook.com:443'
2014/10/16 18:40:16.194 kid1| FwdState.cc(373) startConnectionOrFail:
www.facebook.com:443
2014/10/16 18:40:16.194 kid1| FwdState.cc(1082) connectStart:
fwdConnectStart: www.facebook.com:443
2014/10/16 18:40:16.194 kid1| pconn.cc(340) key:
PconnPool::key(local=[::] remote=[2a03:2880:20:4f06:face:b00c:0:1]:443
flags=1, www.facebook.com) is
{[2a03:2880:20:4f06:face:b00c:0:1]:443/www.facebook.com}
2014/10/16 18:40:16.194 kid1| pconn.cc(436) pop: lookup for key
{[2a03:2880:20:4f06:face:b00c:0:1]:443/www.facebook.com} failed.
2014/10/16 18:40:16.194 kid1| peer_select.cc(94) ~ps_state:
www.facebook.com:443
2014/10/16 18:40:16.194 kid1| fd.cc(221) fd_open: fd_open() FD 33
www.facebook.com
2014/10/16 18:40:16.426 kid1| FwdState.cc(1029) connectDone:
local=[2001:470:828b:0:c460:6ed8:7e00:e8f4]:52765
remote=[2a03:2880:20:4f06:face:b00c:0:1]:443 FD 33 flags=1:
'www.facebook.com:443'
2014/10/16 18:40:17.698 kid1| support.cc(260) ssl_verify_cb: SSL
Certificate signature OK: /C=US/ST=CA/L=Menlo Park/O=Facebook,
Inc./CN=*.facebook.com
2014/10/16 18:40:17.698 kid1| support.cc(260) ssl_verify_cb: SSL
Certificate signature OK: /C=US/ST=CA/L=Menlo Park/O=Facebook,
Inc./CN=*.facebook.com
2014/10/16 18:40:17.698 kid1| support.cc(260) ssl_verify_cb: SSL
Certificate signature OK: /C=US/ST=CA/L=Menlo Park/O=Facebook,
Inc./CN=*.facebook.com
2014/10/16 18:40:17.698 kid1| support.cc(214) check_domain: Verifying
server domain www.facebook.com to certificate name/subjectAltName
*.facebook.com
2014/10/16 18:40:17.950 kid1| FwdState.cc(1218) dispatch:
local=127.0.0.1:3128 remote=127.0.0.1:49230 FD 24 flags=1: Fetching
'CONNECT www.facebook.com:443'
2014/10/16 18:40:17.950 kid1| FwdState.cc(433) unregister:
www.facebook.com:443
2014/10/16 18:40:17.950 kid1| FwdState.cc(458) complete:
www.facebook.com:443
2014/10/16 18:40:17.950 kid1| FwdState.cc(1355) reforward:
www.facebook.com:443?
2014/10/16 18:40:17.950 kid1| client_side.cc(4045) httpsPeeked: HTTPS
server CN: *.facebook.com bumped:
local=[2001:470:828b:0:c460:6ed8:7e00:e8f4]:52765
remote=[2a03:2880:20:4f06:face:b00c:0:1]:443 FD 33 flags=1
2014/10/16 18:40:17.951 kid1| client_side.cc(4049) httpsPeeked: bumped
HTTPS server: www.facebook.com
2014/10/16 18:40:17.951 kid1| client_side_request.cc(265)
~ClientHttpRequest: httpRequestFree: www.facebook.com:443
2014/10/16 18:40:17.951 kid1| client_side.cc(617) logRequest: logging
half-baked transaction: www.facebook.com:443
2014/10/16 18:40:17.951 kid1| client_side.cc(621) logRequest:
clientLogRequest: al.url='www.facebook.com:443'
2014/10/16 18:40:17.951 kid1| HttpHeader.cc(1531) ~HttpHeaderEntry:
destroying entry 0x30c5fd0: 'Host: www.facebook.com:443'
2014/10/16 18:40:17.951 kid1| client_side.cc(3899) getSslContextStart:
Finding SSL certificate for /C=US/ST=CA/L=Menlo Park/O=Facebook,
Inc./CN=*.facebook.com+Sign=signTrusted in cache
2014/10/16 18:40:17.951 kid1| client_side.cc(3904) getSslContextStart:
SSL certificate for /C=US/ST=CA/L=Menlo Park/O=Facebook,
Inc./CN=*.facebook.com+Sign=signTrusted have found in cache
2014/10/16 18:40:17.952 kid1| client_side.cc(3906) getSslContextStart:
Cached SSL certificate for /C=US/ST=CA/L=Menlo Park/O=Facebook,
Inc./CN=*.facebook.com+Sign=signTrusted is valid
2014/10/16 18:40:17.956 kid1| ctx: enter level  0: 'www.facebook.com:443'
2014/10/16 18:40:17.956 kid1| HttpHeader.cc(1531) ~HttpHeaderEntry:
destroying entry 0x30c0810: 'Host: www.facebook.com:443'

-- 
Cheers

Jason Haar
Corporate Information Security Manager, Trimble Navigation Ltd.
Phone: +1 408 481 8171
PGP Fingerprint: 7A2E 0407 C9A6 CAF6 2B9F 8422 C063 5EBB FE1D 66D1

Robert Watson | 15 Oct 22:43 2014

NET::ERR_CERT_COMMON_NAME_INVALID

Has anyone gotten this error when proxying https? I only receive it on a couple of https websites. How can I fix it?
Robert
Jacques Kruger | 15 Oct 10:41 2014

Question on throughput

Hi,

 

I’ve implemented my fair share of squid proxies over the past couple of years and I’ve always been able to find a solution in the mail archive, but this time around I’m stumped.

This is the first time I’ve used squid with a fast (in our context) internet connection, specifically a 4G connection that the provider claims can run up to 100Mbps. Claims aside, my real-world testing is not what I’m expecting. I’ve used two squid instances, one on pfSense (2.7.9) and one on Windows (2.7.STABLE8), and compared the throughput to a connection without squid. What I’ve found, testing with www.speedtest.net, is that the throughput with squid is roughly half that of a direct connection. I’ve left the configuration pretty much at default and have also tried tweaking it, both without success.

 

What are the directives that have the most effect on throughput?

 

Regards,

 

Jacques Kruger

santosh | 15 Oct 09:31 2014

Unable to display splash page on inactive timeout

Hello Team,

I have set up a squid proxy server and have implemented URL blocking and
authentication through LDAP successfully. Now I have a requirement that the
squid proxy should time out inactive authenticated sessions, informing the
user to re-login. I followed the links below:

http://wiki.squid-cache.org/ConfigExamples/Portal/Splash
http://thejimmahknows.com/squid-proxy-splash-page-2/

I have tried different combinations and I'm not able to make it work. I
have posted the ACLs below; please let me know where I'm going wrong.

# INSERT YOUR OWN RULE(S) HERE TO ALLOW ACCESS FROM YOUR CLIENTS
#

external_acl_type splash_page ttl=60 concurrency=100 %SRC
/usr/lib/squid3/ext_session_acl -t 80 -b /usr/lib/squid3/session.db

acl existing_users external splash_page

auth_param basic program /usr/lib/squid3/basic_ldap_auth -b
"dc=example,dc=com" -f "(|(uid=%s)(mail=%s))" -h proxy.example.com

acl ldapauth proxy_auth REQUIRED
acl bad_url url_regex "/etc/squid3/badsites.conf"

http_access deny bad_url
http_access allow ldapauth
http_access deny !existing_users
deny_info 511:/var/www/html/info.php existing_users
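Since http_access is evaluated top-down with first match winning, "http_access allow ldapauth" admits every authenticated request before "http_access deny !existing_users" is ever reached, so the session helper is never consulted. One reordering of the same rules to try (untested sketch):

```
# the session check must run before authenticated users are allowed
http_access deny bad_url
http_access deny !existing_users
http_access allow ldapauth
deny_info 511:/var/www/html/info.php existing_users
```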

--
View this message in context: http://squid-web-proxy-cache.1019090.n4.nabble.com/Unable-to-display-splash-page-on-inactive-timeout-tp4667887.html
Sent from the Squid - Users mailing list archive at Nabble.com.
Mirza Dedic | 15 Oct 02:05 2014

http_access deny for dstdomain acl not denying access to url.. what am I doing wrong?

Trying to understand what I am doing wrong with my ACLs (yes, I've read the ACL guide on the squid site, but I'm still confused). My client is 172.16.10.101 and I'm trying to block access to facebook (and other dstdomain file lists), but it is not working: from the client I can still access fb.

Is this because I have this rule below..?

acl localnet src 172.16.0.0/12
http_access allow localnet

Instead of denying everything and manually maintaining allow rules, I want to allow http/https access to everything except explicitly defined ACLs (in this case the facebook ACL, as a test).

I've tried setting debugging to debug_options ALL,1 33,2 to see more info on ACLs (I read on some site that these are the debug flags to set), but I don't see any ACL details in my access.log file.

my squid.conf (for SQUID 3.3.3) file is below..

acl localnet src 10.0.0.0/8     # RFC1918 possible internal network
acl localnet src 172.16.0.0/12  # RFC1918 possible internal network
acl localnet src 192.168.0.0/16 # RFC1918 possible internal network

acl SSL_ports port 443 8180 8443 563 1494 2598 8531
acl Safe_ports port 80 # http
acl Safe_ports port 81           # http for Pacific Brokerage
acl Safe_ports port 21 # ftp
acl Safe_ports port 443 563 # http
acl Safe_ports port 70 # gopher
acl Safe_ports port 210 # wais
acl Safe_ports port 280 # http-mgmt
acl Safe_ports port 488 # gss-http
acl Safe_ports port 591 # filemaker
acl Safe_ports port 777 # multiling http
acl Safe_ports port 8080 8081 8082 8088 8180
acl Safe_ports port 3128         # Squid http server
acl Safe_ports port 1494 2598   # ICA - Citrix
acl Safe_ports port 7000 8000   # Oracle
acl Safe_ports port 9000         # Oracle
acl Safe_ports port 8530 # WSUS
acl Safe_ports port 55905 # WSUS
acl Safe_ports port 1025-65535 # unregistered ports
acl CONNECT method CONNECT

http_access allow localhost manager
http_access deny manager
http_access deny !Safe_ports
http_access deny CONNECT !SSL_ports
http_access deny to_localhost

acl ads dstdomain "/etc/squid/blacklists/ads/domains"
acl adult dstdomain "/etc/squid/blacklists/adult/domains"
acl gambling dstdomain "/etc/squid/blacklists/gambling/domains"
acl fb dstdomain .facebook.com

http_access allow localnet
http_access allow localhost

http_access deny ads adult gambling fb

http_access deny all

http_port 8080
dns_nameservers 172.16.11.3 172.16.11.2 172.16.11.1
visible_hostname www-proxy

hierarchy_stoplist cgi-bin ?

logformat oppy %ts.%03tu %6tr %>a %>A %Ss/%03>Hs %<st %rm %ru %[un %Sh/%<a %mt
access_log daemon:/var/log/squid/access.log oppy
cache_store_log daemon:/var/log/squid/store.log
cache_log /var/log/squid/cache.log
cache_mem 64 MB
logfile_rotate 4
debug_options ALL,1
# ACL Debug Options
# debug_options ALL,1 33,2
# debug_options ALL,1 33,2 28,9
coredump_dir /var/log/squid/squid

shutdown_lifetime 3 seconds
dns_v4_first on
retry_on_error on
forward_max_tries 25
forward_timeout 30 seconds
connect_timeout 30 seconds
read_timeout 30 seconds
request_timeout 30 seconds
persistent_request_timeout 1 minute

cache_dir ufs /var/cache/squid 100 16 256
cache_mgr ittechs <at> domain.com

snmp_port 0
icp_port 0
htcp_port 0

refresh_pattern ^ftp: 1440 20% 10080
refresh_pattern ^gopher: 1440 0% 1440
refresh_pattern -i (/cgi-bin/|\?) 0 0% 0
refresh_pattern . 0 20% 4320
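Regarding the question above: http_access rules are evaluated top-down and the first match wins, so "http_access allow localnet" admits the client before the deny line is ever reached. A reordered sketch of the same rules:

```
# first-match-wins: deny lines must precede the broad allows
http_access deny ads adult gambling fb
http_access allow localnet
http_access allow localhost
http_access deny all
```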
Thiago Farina | 14 Oct 20:14 2014

cache-control

Hi squiders,

We want to move the following Go code into squid, as we already have
squid in front of our Go server.

The code is:

func makeResourceHandler() func(http.ResponseWriter, *http.Request) {
	fileServer := http.FileServer(http.Dir("./"))
	return func(w http.ResponseWriter, r *http.Request) {
		// note: string(300) would yield the rune U+012C, not "300";
		// presumably a max-age of 300 seconds was intended
		w.Header().Add("Cache-Control", "max-age=300")
		fileServer.ServeHTTP(w, r)
	}
}

and in the main() function we have:

http.HandleFunc("/res/", autogzip.HandleFunc(makeResourceHandler()))

The only thing close to this I found was 'header_access Cache-Control
allow all'.

What is the proper way to do this?
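I'm not aware of a 3.x directive that adds arbitrary reply headers, but newer Squid (4.x or later) has reply_header_add, so one hedged sketch (the "static" ACL name is mine, and the path pattern assumes the /res/ prefix from the Go handler):

```
# assumes a Squid release with reply_header_add (4.x or later);
# 'static' is a hypothetical ACL matching the /res/ resource paths
acl static urlpath_regex ^/res/
reply_header_add Cache-Control "max-age=300" static
```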

Thanks in advance to all for reading and for any reply. Any hint/pointer is appreciated.

Best regards,

-- 
Thiago Farina
Mirza Dedic | 14 Oct 19:37 2014

Best way to deny access to URLs in Squid 3.3.x?

Just curious, what are some of you doing in your Squid environment for URL filtering? It seems there are a few options out there: squidGuard, dansguardian, plain block lists.

What is the best practice for implementing a block list in squid? I've found urlblacklist.com, which has a pretty good URL block list broken down by category. What would be the best way to go: use dansguardian with this list, or set it up in squid.conf as an "acl dstdomain" and feed in the block list file without calling an external helper application?
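For the squid-only route, a minimal sketch (the file path and ACL name are hypothetical; the file holds one domain per line, and a leading dot matches subdomains too):

```
# hypothetical path to one downloaded category file,
# one domain per line (".example.com" matches subdomains)
acl blocklist dstdomain "/etc/squid/blacklists/adult/domains"
http_access deny blocklist
```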

Thanks.