No Name | 11 Apr 09:26 2014

request_header_add question

Hello List,

at the moment I need to use the request_header_add directive to supply
information to a cache_peer backend.
I intended to use:
request_header_add X-Authenticated-User "%ul"
but the "%ul" is expanded to a dash (-), and I wonder why, and how I can
pass the authenticated user on to my backend.
Can someone give me a hint?
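For what it's worth, a hedged sketch of two usual approaches (the acl name and backend address are illustrative; %ul only carries a value once authentication has actually happened on the request, and request_header_add only accepts logformat codes in newer Squid releases):

```
# force proxy authentication first, so %ul is populated
acl authed proxy_auth REQUIRED
http_access allow authed

# then add the header only for authenticated requests
request_header_add X-Authenticated-User "%ul" authed

# alternative: have Squid relay the credentials to the peer itself
# cache_peer backend.example.com parent 8080 0 originserver login=PASS
```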

Thanks!
Regards
Stefan König

Amos Jeffries | 11 Apr 07:47 2014

Re: Squid Question about method GET

On 10/04/2014 2:27 a.m., MIGUEL ANGEL AGUAYO ORTUÑO wrote:
> 
> 
> I had this config befor
> 
> acl my_url dstdomain jakjak.dit.upm.es
> redirector_access allow my_url
> redirect_children 1
> redirect_rewrites_host_header off
> redirect_program /etc/squid/dashcheck.pl
> 
> but this configuration only matches on the destination domain
> 
> 
> and I'm trying to use this configuration to match the file types I want
> 
> acl my_url urlpath_regex \.(mpd|m4s)$
> redirector_access allow my_url
> redirect_children 1
> redirect_rewrites_host_header off
> redirect_program /etc/squid/dashcheck.pl
> 
> but the thing is that when I request
> 
> http://jakjak.dit.upm.es/mpd/sintel.mpd
> 
> it does not go through the redirector.
> 
> Why?
> 
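For reference, the urlpath_regex pattern above does match the path of that URL, which a quick check confirms (a sketch; Squid applies urlpath_regex to the path portion of the URL, so the regex itself is unlikely to be the problem):

```python
import re

# The ACL pattern from the config above, applied to the path of the
# example URL. It matches, so the place to look is the
# redirector_access / redirect_program setup, not the regex.
acl_pattern = re.compile(r"\.(mpd|m4s)$")
path = "/mpd/sintel.mpd"
print(bool(acl_pattern.search(path)))  # True
```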

Strafe | 10 Apr 17:56 2014

www.earth.com/moon instead of moon.earth.com ?

Can someone please advise me about a problem that I have.

I've deployed a Squid3 reverse proxy server. I have one server in the
internal network that is forwarded through Squid.

My external network is called earth.com and the internal server is called
moon. I've managed to set Squid up to forward requests that come for
moon.earth.com to the internal server moon, which is at IP 192.168.1.10.

My question is: how can I set Squid up to forward requests when it
receives http://www.earth.com/moon instead of the current setup,
http://moon.earth.com?

My current config lines are these:
-----------------------------------

acl moon_users dstdomain moon.earth.com

http_access allow moon_users

cache_peer 192.168.1.10 parent 8080 0 no-query originserver name=moon
cache_peer_domain moon moon.earth.com

cache_peer_access moon allow moon_users
cache_peer_access moon deny all
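One common approach for path-based routing is to match on the URL path instead of the domain (a sketch, untested; acl names are illustrative). Note that the backend will still see the /moon prefix in the path unless a url_rewrite_program strips it:

```
# match requests for www.earth.com whose path starts with /moon
acl earth_site dstdomain www.earth.com
acl moon_path urlpath_regex ^/moon

http_access allow earth_site moon_path

cache_peer 192.168.1.10 parent 8080 0 no-query originserver name=moon
cache_peer_access moon allow earth_site moon_path
cache_peer_access moon deny all
```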

--
View this message in context: http://squid-web-proxy-cache.1019090.n4.nabble.com/How-to-change-redirection-path-to-forward-to-www-earth-com-moon-insted-of-moon-earth-com-tp4665521.html
Sent from the Squid - Users mailing list archive at Nabble.com.

Nick Hill | 10 Apr 14:28 2014

Re: Cache Windows Updates ONLY

I found the discussion in a web post on Nabble, which I presume does not
feed back to this list. I located the discussion forum from the web
site, have subscribed, and hope this message will be useful. A web
interface to this mailing list could be very useful for capturing
important information from those users who seldom have something to
add.

I use a similar configuration on my Squid to the one used by HilltopsGM.

Microsoft have recently released a 4 GB update for Windows 8, downloaded
with range requests. This will likely cause Squid to use excessive
bandwidth; my cache was slaughtering bandwidth until I made some
changes.

It appears Microsoft now use .psf files, which seem to cache OK.

#Note: include psf files
refresh_pattern -i microsoft.com/.*\.(cab|exe|ms[iuf]|asf|wm[va]|dat|zip|psf) 4320 80% 43200 reload-into-ims
refresh_pattern -i windowsupdate.com/.*\.(cab|exe|ms[iuf]|asf|wm[va]|dat|zip|psf) 4320 80% 43200 reload-into-ims

#Having already defined the windowsupdate ACL,
range_offset_limit -1 windowsupdate
quick_abort_min -1 KB windowsupdate
maximum_object_size 5000000 KB  windowsupdate
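The windowsupdate ACL referenced in the comment above is not shown in this message; it might look something like the following (a sketch, the domain list is illustrative and incomplete):

```
acl windowsupdate dstdomain .windowsupdate.com .update.microsoft.com .download.microsoft.com
```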

#And for a cache replacement policy oriented to

Dipjyoti Bharali | 10 Apr 07:41 2014

Fwd: Fwd: Re: Re: WARNING: Forwarding loop detected for:


Hi,

Any clues after seeing my squid.conf? I can see another person facing the
same problem ("Squid brought down by hundreds of HEAD request to itself"),
which should have reached your mailboxes today.

*Dipjyoti Bharali*

*Please consider the environment before printing this email. *
On 08-04-2014 15:51, Dipjyoti Bharali wrote:
> squid.conf is as follows,
>
############################################################################################ 
>
>
> https_port 192.168.1.1:3129 cert=/etc/pki/myCA/private/server-key-cert.pem transparent
>
> http_port 192.168.1.1:3128 transparent
>
> acl QUERY urlpath_regex cgi-bin \?
> acl apache rep_header Server ^Apache
> access_log /var/log/squid/access.log squid
> hosts_file /etc/hosts
>
> refresh_pattern ^ftp:// 480 60% 22160
> refresh_pattern ^gopher:// 30 20% 120
> refresh_pattern . 480 50% 22160
>

nodje | 10 Apr 03:32 2014

Squid brought down by hundreds of HEAD request to itself

The Squid instance is started in the morning and stopped at night.

It is daily brought down by what I call "hundreds of HEAD request to
itself".

There's no fixed pattern for the problem.

Sometimes Squid keeps working OK with hundreds of those requests;
sometimes it just becomes very unresponsive.

Here's what the requests look like with my logformat:

09/Apr/2014:17:41:02] 192.168.0.2 TCP_MISS:DEFAULT_PARENT 504 "HEAD
http://192.168.0.2:3128/ HTTP/1.0" Size:333 Ref:"-" Agent:"-"

Squid's server IP is 192.168.0.2, so it looks like the server itself is
requesting the proxy. Nothing else that I know of runs on the same
server that would access the proxy.
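Pulling the fields out of that log line makes the point explicit (a sketch against the custom logformat quoted above; the client address equals the proxy's own address):

```python
import re

# Parse the access-log line quoted above: after the "]" come the client
# IP, the status tags, the HTTP status, then the quoted request line.
line = ('09/Apr/2014:17:41:02] 192.168.0.2 TCP_MISS:DEFAULT_PARENT 504 "HEAD '
        'http://192.168.0.2:3128/ HTTP/1.0" Size:333 Ref:"-" Agent:"-"')

m = re.search(r'\] (\S+) \S+ \d+ "(\w+) (\S+)', line)
client_ip, method, url = m.groups()
print(client_ip, method, url)  # 192.168.0.2 HEAD http://192.168.0.2:3128/
```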

Where could a HEAD request like that come from?

Additional info: the size is always 333 during runtime, but when I do a
restart, while Squid is stopping I see much higher numbers, first in the
thousands and then quickly up to ~20000; then it stops, restarts, and
the pattern disappears for a couple of hours.

Any idea of what could cause this to happen?

Windows 7 running Squid 2.7.STABLE8

fordjohn | 10 Apr 02:09 2014

Squid not sending request to web

Hi All,
I have Squid 3.3.8 configured as a transparent proxy. My router is
redirecting web requests on port 80 to the Squid box on port 3128. The
problem is that requests come back with "The requested URL could not be
retrieved". My configuration file is below; I am hoping that someone can
take a look at it and help me resolve this issue. The proxy server works
when I point the browser directly at port 3128. The router script is
below the config file.
Thanks

#Recommended minimum configuration:
#acl manager proto cache_object
acl localhost src 127.0.0.1/32
acl to_localhost dst 127.0.0.0/8
acl localnet src 192.168.1.0/24 
acl lan src 192.168.1.0/255.255.255.0
acl SSL_ports port 443
acl Safe_ports port 80		# http
acl Safe_ports port 21		# ftp
acl Safe_ports port 443		# https
acl Safe_ports port 70		# gopher
acl Safe_ports port 210		# wais
acl Safe_ports port 1025-65535	# unregistered ports
acl Safe_ports port 280		# http-mgmt
acl Safe_ports port 488		# gss-http
acl Safe_ports port 591		# filemaker
acl Safe_ports port 777		# multiling http

acl bad_url url_regex "/etc/squid3/blockedsites.acl"
#acl lan src 192.168.1.0/25

MIGUEL ANGEL AGUAYO ORTUÑO

Squid Question about method GET

Hi all,

My name is Miguel A. Aguayo. I'm working on a project and have some
questions about Squid.

The project consists of the following:

I have a server with video content in the 3GP-DASH format, and I'm
transferring this content to a client over multicast using the FLUTE
standard.

On the client I have Apache2 and Squid.

What I'm trying to do is a Squid config that catches the GET requests of
a VLC client trying to reach the content on my server and passes the
requested URL to a Perl executable. That executable checks the local
Apache server to see whether the content is already on the client: if it
is, Apache redirects to that content; if not, the GET goes on to the
server for the content.

I have implemented a redirector that just sends the requests to
localhost, but without the intelligence needed to check whether the file
exists on the client's Apache.

My question is how to pass the requested URL from the GET to a Perl
program, because I haven't seen an example of that.

Thanks
Best Regards
--------------------------------------------------------

Dipjyoti Bharali | 8 Apr 08:15 2014

WARNING: Forwarding loop detected for:

Hi,

I am facing this peculiar issue with certain specific clients. When
these clients connect to the proxy server, it goes for a toss until I
reload the service. When I examine the log file, I get this same message
every time.

    /2014/04/02 09:00:17| WARNING: Forwarding loop detected for:
    GET / HTTP/1.1
    Content-Type: text/xml; charset=Utf-16
    UNICODE: YES
    Content-Length: 0
    Host: 192.168.1.1:3128
    Via: 1.0 hindenberg (squid), 1.1 hindenberg (squid), 1.1 hindenberg
    (squid), 1.1 hindenberg (squid), 1.1 hindenberg (squid), 1.1
    hindenberg (squid), 1.1 hindenberg (squid), 1.1 hindenberg (squid),
    1.1 hindenberg (squid), 1.1 hindenberg (squid), 1.1 hindenberg
    (squid), 1.1 hindenberg (squid), 1.1 hindenberg (squid), 1.1
    hindenberg (squid), 1.1 hindenberg (squid), 1.1 hindenberg (squid),
    1.1 hindenberg (squid), 1.1 hindenberg (squid), 1.1 hindenberg
    (squid), 1.1 hindenberg (squid), 1.1 hindenberg (squid), 1.1
    hindenberg (squid), 1.1 hindenberg (squid), 1.1 hindenberg (squid),
    1.1 hindenberg (squid), 1.1 hindenberg (squid), 1.1 hindenberg
    (squid), 1.1 hindenberg (squid), 1.1 hindenberg (squid), 1.1
    hindenberg (squid), 1.1 hindenberg (squid), 1.1 hindenberg (squid),
    1.1 hindenberg (squid), 1.1 hindenberg (squid), 1.1 hindenberg
    (squid), 1.1 hindenberg (squid), 1.1 hindenberg (squid), 1.1
    .
    .
    .

aditya agarwal | 8 Apr 07:11 2014

Caching not working for Youtube videos

Hi,

Last week we realized that our caching of YouTube videos is broken and no
longer working. We are using the 'storeurl_rewrite_program' directive to
rewrite the URL of all YouTube videos. The following is our configuration
(Squid 2.7):

acl store_rewrite_list url_regex  youtube
cache allow store_rewrite_list 
storeurl_access allow store_rewrite_list 
storeurl_access deny all 
storeurl_rewrite_program VideoCachingPolicy.pl 
storeurl_rewrite_children 1 
storeurl_rewrite_concurrency 100 
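The core idea behind such a store-url rewriter can be sketched like this (Python rather than Perl; the internal hostname and parameter names are illustrative, and YouTube's URL layout changes often, which is exactly why these schemes keep breaking):

```python
from urllib.parse import urlparse, parse_qs

# Map every videoplayback URL carrying the same "id" parameter onto one
# internal key, so CDN-host and range variations still hit a single
# cache entry.
def store_key(url):
    parsed = urlparse(url)
    if "videoplayback" not in parsed.path:
        return ""                      # empty answer: store under real URL
    vid = parse_qs(parsed.query).get("id", [""])[0]
    if not vid:
        return ""
    return "http://youtube.squid.internal/videoplayback?id=" + vid
```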

We use the following method in VideoCachingPolicy.pl:
1. All youtube requests which have stream_204 and generate_204 in the URL are stored in a log file.
2. In the Perl file, for each request we check if it has videoplayback + google/youtube in the URL.
3. If yes, then we read (backwards) the log file generated in step 1.
    a. We check if any of the stream_204/generate_204 requests have a matching CPN field. If yes, we
extract the docid from these requests and generate an internal URL.
    b. Else we append the ID that came with the current request. Note: as this ID is dynamically
generated for every request stream, it doesn't result in a cache HIT.

This method was working fine for some time, but now it seems to be broken. On investigating I found two issues:
1. The stream_204/generate_204 requests do not always come before the videoplayback requests.
2. Even if stream_204 requests come before videoplayback, they are not logged immediately. When I try to
read the file, it doesn't have these lines initially, but it has them later on.

Is anyone else facing these issues? Is there any long term solution for caching Youtube videos?


Amos Jeffries | 8 Apr 05:56 2014

Re: How to make squid proxy server cache response with vary: * in header?

On 8/04/2014 3:02 p.m., Sylvio Cesar wrote:
> Amos, how do I use squidclient to download a file, a .flv for example?
> 

squidclient -h shows the full set of parameters available and what they
do, as with any good command line tool.

Via proxy on localhost:
 squidclient http://stackoverflow.com/

Via proxy at example.com (could be an IP if needed):
 squidclient -h example.com http://stackoverflow.com/

Direct from the web server:
 squidclient -p 80 -h stackoverflow.com /

NP: Depending on tool version you may or may not also need the "-j
stackoverflow.com" or " -H 'Host:stackoverflow.com\n' " parameters to
set the Host: header explicitly.
 The -H takes a string of extra headers separated by \n to add to the
request.

Amos

> 2014-04-07 23:35 GMT-03:00 Amos Jeffries <squid3 <at> treenet.co.nz>:
>>
>> "Vary:*" means the response changes depending on factors outside the
>> HTTP protocol for which shared proxies like Squid are 100% unable to
>> determine whether the cached response is appropriate to deliver.
>>  Even if you did store it, the cache would still always MISS.

