Sebastian Goicochea | 20 Jul 19:11 2015

Trying to eliminate field from header reply

Hello, I'm trying to make some modifications to the Squid source code. I
want to eliminate some fields from the reply headers that the server sends.
I've been searching through the code but I can't seem to find the exact
point to make it work.
Does anyone have any clue where (as in which file or files) I should make the
modification?
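In case a configuration-level approach is enough (a hedged suggestion, not a claim about the exact requirements here), squid.conf can already strip reply headers without any source change; the header names below are only examples:

```
# squid.conf sketch: remove selected headers from server replies
reply_header_access Server deny all
reply_header_access X-Cache deny all
```

If a source change really is needed, the reply-header handling lives around src/HttpHeader.cc and src/HttpReply.cc in the Squid tree.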

Thanks,
Sebastian
_______________________________________________
squid-users mailing list
squid-users <at> lists.squid-cache.org
http://lists.squid-cache.org/listinfo/squid-users
Stakres | 20 Jul 17:27 2015

How to get the correct size of a denied object?

Hi All,

As you know, when an object is denied by an ACL or otherwise, the size of the
object in the log file is the size of the ERR_* page.
Is there a way to get the correct/real size of the blocked object?

I know the URL is denied before Squid fetches the object from the Internet,
but it would be nice to have a special action/option that writes the real
size to access.log instead of the ERR page size.
We don't care about the size of the ERR page; knowing the real size of the
denied object is much more important. Not the size we blocked, but the size
we avoided downloading: that is valuable data for clients...

Is it possible to plan a solution for the next build?
Just get the size from the headers, deny the object, then write the correct
size to access.log.
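Squid never contacts the origin for a denied request, so the size is simply not available at logging time. A post-processing sketch (not a Squid feature): scan access.log for denied requests and issue a HEAD request afterwards to record the Content-Length the client would have downloaded. Field positions assume the default "squid" logformat; adjust for custom formats.

```python
from urllib.request import Request, urlopen

def denied_url(line: str):
    """Return the URL of a TCP_DENIED access.log line, else None."""
    fields = line.split()
    if len(fields) > 6 and fields[3].startswith("TCP_DENIED"):
        return fields[6]
    return None

def head_content_length(url: str, timeout: float = 5.0):
    """Fetch only the headers; return Content-Length if the origin sends one."""
    req = Request(url, method="HEAD")
    with urlopen(req, timeout=timeout) as resp:
        return resp.headers.get("Content-Length")

if __name__ == "__main__":
    sample = ('1437000000.123    5 192.0.2.10 TCP_DENIED/403 3984 GET '
              'http://example.com/big.iso - HIER_NONE/- text/html')
    print(denied_url(sample))  # http://example.com/big.iso
```

Note the caveat: a later HEAD may see a different object than the client requested, and not every origin sends Content-Length, so this is an estimate at best.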

Thanks in advance.

Bye Fred

--
View this message in context: http://squid-web-proxy-cache.1019090.n4.nabble.com/How-to-get-the-correct-size-of-a-denied-object-tp4672332.html
Sent from the Squid - Users mailing list archive at Nabble.com.

HackXBack | 18 Jul 15:11 2015

FATAL: xcalloc: Unable to allocate 18446744073527142243 blocks of 1 bytes!

cache.log

Squid Cache (Version 3.4.13-20150501-r13224): Terminated abnormally.
CPU Usage: 0.052 seconds = 0.024 user + 0.028 sys
Maximum Resident Size: 83440 KB
Page faults with physical i/o: 0
2015/07/18 23:06:55 kid1| Set Current Directory to /var/spool/squid
2015/07/18 23:06:55 kid1| Starting Squid Cache version 3.4.13-20150501-r13224 for x86_64-unknown-linux-gnu...
2015/07/18 23:06:55 kid1| Process ID 30443
2015/07/18 23:06:55 kid1| Process Roles: worker
2015/07/18 23:06:55 kid1| With 65535 file descriptors available
2015/07/18 23:06:55 kid1| Initializing IP Cache...
2015/07/18 23:06:55 kid1| DNS Socket created at 0.0.0.0, FD 7
2015/07/18 23:06:55 kid1| Adding nameserver 127.0.0.1 from squid.conf
2015/07/18 23:06:55 kid1| helperOpenServers: Starting 40/50 'ssl_crtd' processes
2015/07/18 23:06:55 kid1| helperOpenServers: Starting 1/1 'rewriter.pl' processes
2015/07/18 23:06:55 kid1| helperOpenServers: Starting 1/1 'storeid.pl' processes
2015/07/18 23:06:55 kid1| Logfile: opening log /var/log/squid/access.log
2015/07/18 23:06:55 kid1| WARNING: log name now starts with a module name. Use 'stdio:/var/log/squid/access.log'
FATAL: xcalloc: Unable to allocate 18446744073527142243 blocks of 1 bytes!
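Not a diagnosis of the root cause, but the size in the FATAL line is itself informative: a block count that close to 2^64 is what a negative (underflowed) allocation size looks like after being cast to an unsigned 64-bit integer. A quick check in plain Python (no Squid internals assumed):

```python
# The huge block count from the FATAL line.
FATAL_BLOCKS = 18446744073527142243

def as_signed64(u: int) -> int:
    """Reinterpret an unsigned 64-bit integer as a signed 64-bit one."""
    return u - 2**64 if u >= 2**63 else u

# The request was really for a negative number of blocks,
# i.e. some size computation underflowed before reaching xcalloc.
print(as_signed64(FATAL_BLOCKS))  # -182409373
```

So this looks like an integer underflow bug worth reporting with the full cache.log, rather than a genuine out-of-memory condition.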

My configure options:

./configure --prefix=/usr --bindir=/usr/bin --sbindir=/usr/sbin
--libexecdir=/usr/lib/squid --sysconfdir=/etc/squid --localstatedir=/var

Laz C. Peterson | 17 Jul 15:42 2015

Redirects error for only some Citrix sites

Hello all,

Very weird issue here.  This happens only with certain Citrix support articles.  (For example, http://support.citrix.com/article/CTX122972 when searching Google for “citrix netscaler expired password”, which is the top link in my results, or when searching for the same article directly on the Citrix support site.)

This is a new install of Squid 3 on Ubuntu 14.04.2 (from the Ubuntu repository).  When clicking the Google link, I get a “too many redirects” error, saying that the page possibly refers to another page that is then redirected back to the original page.

I tried debugging but did not find much useful information.  Has anyone else seen behavior like this?
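For debugging, one way to see where the loop closes is to record the redirect chain (from the Location headers) and stop at the first repeated URL; a small sketch, with a hypothetical chain (the helper below is ours, not a Squid tool):

```python
def first_repeat(chain):
    """Return the first URL that appears twice in a redirect chain, else None."""
    seen = set()
    for url in chain:
        if url in seen:
            return url
        seen.add(url)
    return None

# Hypothetical chain as a browser or curl -v might record it:
chain = [
    "http://support.citrix.com/article/CTX122972",
    "http://support.citrix.com/login?next=CTX122972",
    "http://support.citrix.com/article/CTX122972",
]
print(first_repeat(chain))  # http://support.citrix.com/article/CTX122972
```

Comparing the chain seen through Squid with the chain seen on a direct connection usually shows which hop the proxy is changing.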

~ Laz Peterson
Paravis, LLC
HackXBack | 17 Jul 02:03 2015

Re: assertion failed: comm.cc:178: "fd_table[conn->fd].halfClosedReader != NULL"

Using
range_offset_limit none
over HTTP sites works without any assertion error,

but using it with HTTPS sites causes this assertion error,
so there is a problem between this option and port 443 connections;
the problem occurs with HTTPS partial content only.
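If the assertion really is triggered only when range_offset_limit covers HTTPS traffic, a hedged workaround sketch is to scope the directive with an ACL so it applies to plain HTTP only (range_offset_limit accepts an optional ACL list in Squid 3.2 and later):

```
# squid.conf sketch: apply range_offset_limit only to plain HTTP
acl plainHTTP proto HTTP
range_offset_limit none plainHTTP
```

That sidesteps the crash rather than fixing it; the assertion itself still looks worth a bug report.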

HackXBack | 17 Jul 01:40 2015

redirect TCP_NONE

I have an idea for solving problems with sites and apps that work on port 443
but can't establish a connection through Squid.
I see that when such a connection can't be established, TCP_NONE appears in
access.log.
So why can't we have an option so that, when TCP_NONE occurs for some app,
the connection is redirected to a tunnel and bypassed? The connection would
then be established without decryption, and at minimum it would work
automatically without adding ssl_bump none x.x.x.x for that IP.
Who can support me?
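Until such an automatic fallback exists, the manual equivalent of this idea is to splice (tunnel without decryption) the destinations that fail; a sketch with placeholder names, noting that the server_name ACL only matches after a peek at step 1:

```
# squid.conf sketch: tunnel known-problematic destinations instead of bumping
acl step1 at_step SslBump1
acl no_bump ssl::server_name .problem-app.example   # placeholder domain
ssl_bump peek step1
ssl_bump splice no_bump
ssl_bump bump all
```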

johnzeng | 16 Jul 17:27 2015

a problem about reverse proxy and $.ajax


Hello dear all:

I have been writing a download-rate testing program recently, and I would
like to use a reverse proxy (Squid 3.5.x) as well.

But when using the reverse proxy, I found that the Ajax download does not
succeed: in success: function(html,textStatus), the return value (html) is blank.

If possible, please give me some advice.

squid config

http_port 4432 accel vport defaultsite=10.10.130.91
cache_peer 127.0.0.1 parent 80 0 default name=ubuntu-lmr

Ajax config

$.ajax({
    type: "GET",
    url: load_urlv,
    cache: false,
    mimeType: 'text/plain; charset=x-user-defined',

    beforeSend: function() {
        $('#time0').html('<blink>download file...</blink>').show();
    },

    error: function() {
        alert('Error loading XML document');
    },

    success: function(html, textStatus)
    {

        ...........................

    }
});
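One hedged guess, assuming the test page and the accelerated port (4432) count as different origins for the browser: a blank html in the success callback is a classic symptom of a missing Access-Control-Allow-Origin header on the proxied response. A minimal check of response headers, with hypothetical values:

```python
def cors_allows(origin: str, headers: dict) -> bool:
    """True if a response's CORS header permits reading it from the given origin."""
    allow = headers.get("Access-Control-Allow-Origin")
    return allow == "*" or allow == origin

# Hypothetical responses as seen through the reverse proxy:
print(cors_allows("http://10.10.130.91:4432", {}))                                    # False
print(cors_allows("http://10.10.130.91:4432", {"Access-Control-Allow-Origin": "*"}))  # True
```

Checking the actual headers with the browser's network panel (or curl -i) through port 4432 versus port 80 directly should confirm or rule this out.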

Sebastian Kirschner | 16 Jul 16:51 2015

Peek and Splice error SSL_accept failed

Hi, I´m using Squid version 3.5.6 on a Debian test system.

I am trying to bypass some sites using the "ssl::server_name" ACL; to do that I need to peek at the connection
first to decide whether it should be spliced or bumped.

But if I use peek at step 1, "client_side.cc(4245) clientPeekAndSpliceSSL: SSL_accept failed."
errors appear in cache.log.

Squid was built with the following options:
./configure --build=x86_64-linux-gnu \
--prefix=/usr \
--includedir=${prefix}/include \
--mandir=${prefix}/share/man \
--infodir=${prefix}/share/info \
--sysconfdir=/etc \
--localstatedir=/var \
--libexecdir=${prefix}/lib/squid3 \
--srcdir=. \
--disable-maintainer-mode \
--disable-dependency-tracking \
--disable-silent-rules \
--datadir=/usr/share/squid3 \
--sysconfdir=/etc/squid3 \
--mandir=/usr/share/man \
--enable-inline \
--disable-arch-native \
--enable-async-io=8 \
--enable-storeio=ufs,aufs,diskd,rock \
--enable-removal-policies=lru,heap \
--enable-delay-pools \
--enable-cache-digests \
--enable-icap-client \
--enable-follow-x-forwarded-for \
--enable-auth-basic=DB,fake,getpwnam,LDAP,NCSA,NIS,PAM,POP3,RADIUS,SASL,SMB \
--enable-auth-digest=file,LDAP \
--enable-auth-negotiate=kerberos,wrapper \
--enable-auth-ntlm=fake,smb_lm
\
--enable-external-acl-helpers=file_userip,kerberos_ldap_group,LDAP_group,session,SQL_session,unix_group,wbinfo_group \
--enable-url-rewrite-helpers=fake \
--enable-eui \
--enable-esi \
--enable-icmp \
--enable-zph-qos \
--enable-ecap \
--disable-translation \
--with-swapdir=/var/spool/squid3 \
--with-logdir=/var/squid/logs \
--with-pidfile=/var/run/squid3.pid \
--with-filedescriptors=65536 \
--with-large-files \
--with-default-user=proxy \
--with-openssl \
--with-open-ssl=/etc/ssl/openssl.cnf \
--enable-ssl-crtd \
--enable-linux-netfilter \
'CFLAGS=-g -O2 -fPIE -fstack-protector-strong -Wformat -Werror=format-security -Wall' \
'LDFLAGS=-fPIE -pie -Wl,-z,relro -Wl,-z,now' \
'CPPFLAGS=-D_FORTIFY_SOURCE=2' \
'CXXFLAGS=-g -O2 -fPIE -fstack-protector-strong -Wformat -Werror=format-security'

The squid.conf
http_port 192.168.1.104:3128 intercept
https_port 192.168.1.104:3129 intercept ssl-bump generate-host-certificates=on
dynamic_cert_mem_cache_size=10MB cert=/etc/squid3/ssl_cert/myCA.pem
http_port 127.0.0.1:3120

icp_port 0
dns_v4_first on
pid_filename /var/run/squid/squid.pid
cache_effective_user proxy
cache_effective_group proxy
error_default_language de-de
visible_hostname pfsense
cache_mgr admin <at> test
access_log /var/squid/logs/access.log
cache_log /var/squid/logs/cache.log
cache_store_log none
netdb_filename /var/squid/logs/netdb.state
pinger_enable on
pinger_program /lib/squid3/pinger
sslproxy_capath /etc/ssl/certs
sslcrtd_program /lib/squid3/ssl_crtd -s /var/squid/certs -M 4MB -b 2048
sslproxy_cert_error allow all

logfile_rotate 7
debug_options rotate=7
shutdown_lifetime 3 seconds
# Allow local network(s) on interface(s)
acl localnet src  192.168.1.0/24
forwarded_for on
uri_whitespace strip

cache_mem 30 MB
maximum_object_size_in_memory 128 KB
memory_replacement_policy heap GDSF
cache_replacement_policy heap LFUDA
cache_dir ufs /var/squid/cache 100 16 256
minimum_object_size 0 KB
maximum_object_size 400 KB
offline_mode off
cache_swap_low 90
cache_swap_high 95
cache allow all

# Add any of your own refresh_pattern entries above these.
refresh_pattern ^ftp:           1440    20%     10080
refresh_pattern ^gopher:        1440    0%      1440
refresh_pattern -i (/cgi-bin/|\?) 0     0%      0
refresh_pattern .               0       20%     4320

# Setup some default acls
# From 3.2 further configuration cleanups have been done to make things easier and safer.
# The manager, localhost, and to_localhost ACL definitions are now built-in.
# acl localhost src 127.0.0.1/32
acl allsrc src all
acl safeports port 21 70 80 210 280 443 488 563 591 631 777 901  3128 3127 1025-65535
acl sslports port 443 563

acl purge method PURGE
acl connect method CONNECT

# Define protocols used for redirects
acl HTTP proto HTTP
acl HTTPS proto HTTPS
acl allowed_subnets src 192.168.1.0/24
http_access allow manager localhost

http_access deny manager
http_access allow purge localhost
http_access deny purge
http_access deny !safeports
http_access deny CONNECT !sslports

request_body_max_size 0 KB
delay_pools 1
delay_class 1 2
delay_parameters 1 -1/-1 -1/-1
delay_initial_bucket_level 100
delay_access 1 allow allsrc

# Debugging if needed
debug_options all,2 16,0 18,0 19,0 22,0 47,0 79,0

# Setup allowed acls
# Allow local network(s) on interface(s)
http_access allow allowed_subnets
http_access allow localnet
# Default block all to be sure
http_access deny allsrc

acl step1 at_step SslBump1
acl step3 at_step SslBump3
acl bypass ssl::server_name .sparkasse.de, .internet-filiale.net

ssl_bump peek step1
ssl_bump splice bypass
ssl_bump bump step3

always_direct allow all
ssl_bump bump all
ssl_bump server-first
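Two details in the bumping section above look suspect (a hedged reading, not a confirmed diagnosis): ssl::server_name values are space-separated in squid.conf rather than comma-separated, and the trailing `ssl_bump bump all` plus the old mode-style `ssl_bump server-first` line conflict with the peek/splice rules, since ssl_bump directives are evaluated in order and the old mode syntax should not be mixed with action rules. A cleaned sketch of just that section:

```
# squid.conf sketch: peek first, splice the bypass list, bump the rest
acl step1 at_step SslBump1
acl bypass ssl::server_name .sparkasse.de .internet-filiale.net
ssl_bump peek step1
ssl_bump splice bypass
ssl_bump bump all
```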

Mit freundlichen Grüßen / Best Regards

Sebastian 
Tory M Blue | 16 Jul 01:38 2015

3.5.6 still gets only-if-cached errors sourced from a sibling


Was hoping this was fixed in 3.5.6

The following error was encountered while trying to retrieve the URL: http://view-dev.eng.domain.net/rimfire/adm/search?

Valid document was not found in the cache and only-if-cached directive was specified.

You have issued a request with a only-if-cached cache control directive. The document was not found in the cache, or it required revalidation prohibited by the only-if-cached directive.

Your cache administrator is webmaster.


This only happens when I use HTCP between 2 siblings that otherwise have paths to other parents which are origins.


This should work, and I thought the release notes indicated this was fixed.
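One hedged workaround to try in the meantime: the allow-miss cache_peer option disables Squid's use of only-if-cached when forwarding requests to a sibling, so a sibling miss can be fetched instead of producing this error page. Hostname and ports below are placeholders:

```
# squid.conf sketch: let requests forwarded to the sibling fetch misses
cache_peer sibling1.example.net sibling 3128 4827 htcp allow-miss
```

Note that allow-miss changes sibling semantics (the sibling may now fetch from its parents on your behalf), so whether it is acceptable depends on the topology.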


Thanks

Tory

markme | 15 Jul 23:57 2015

Blocked DNS request from IDS causes Squid to not work

I've been running Squid 3.3.8 on CentOS 7 for a few months now, and every now
and then I get a "Suspicious .pw DNS query" alert from my IDS, triggered by a
query from Squid, which then gets blocked. When this happens most clients
start to get a 503 error or NONE_ABORTED/000 in the access log and can't
access the Internet. To fix it I have just been issuing a reconfigure on
Squid, which seems to fix the problem until it happens again. Only that one
particular DNS query to our local DNS server gets blocked; everything else
goes through. Any ideas on what might be making Squid require a reconfigure
to start working again? Thanks!

Stanford Prescott | 15 Jul 17:38 2015

ufdbGuard cannot check ssl certs

Hi all.

I've seen some folks asking questions about ufdbGuard and squidGuard here, so I thought I would give it a try, too.

I am trying to integrate ufdbGuard to replace a working install of squidGuard on our Smoothwall Express firewall distro with Squid 3.5.5. Hopefully, if I can get it working, it is something I will be able to provide to the Smoothwall community for our thousands of users. Maybe some of those will subscribe to the URLFilterDB? :-)

Anyway, I am to the point where I am trying to start ufdbGuard from the command line just using "/usr/sbin/ufdbGuard" for testing. I finally got rid of all error messages except one,

FATAL Error: Cannot perform mandatory check of SSL certificates ****
Core dumped

I have read through the ufdbGuard reference manual and googled the error and can't seem to find anything that deals with troubleshooting this error.

My compile options are

--prefix=/usr --with-ufdb-user=squid --localstatedir=/var/smoothwall \
               --with-ufdb-config=/var/smoothwall/ufdbguard \
               --with-ufdb-logdir=/var/log/ufdbguard --with-ufdb-dbhome=/var/smoothwall/ufdbguard/blacklists \
               --with-ufdb-images_dir=/httpd/html/ui/img/ufdbguard --with-ufdb-piddir=/var/run

Am I just not starting ufdbGuard correctly? Or is it something else that prevents ufdbGuard from checking the SSL certs?

Thanks.

Stan
