Steven Shi | 28 Aug 05:21 2015

Re: Re: Proxying From Directory To App On Port

Hey Kurtis,

I was in the middle of replying and then something clicked for me.  I spent the day today working on this one more time and ended up getting it working.  I believe my fundamental understanding of a proxy was simply incorrect.

Your comment about the URI prefix was what made it click for me.  

Basically, all my requests on the client front end were formatted like "/command".  I mistakenly thought that the client would run these with respect to the host they came from (Jetty returns the client as a result of the initial proxied GET to localhost:2000).  Apparently, the client ends up realizing it is no longer on localhost:2000 but is instead on localhost:80, and issues the requests accordingly.  That would also explain why "/command" results in the GET/POST URL "localhost:80/command".

After your comment, I realized that requests also need to have their URIs relative to the original host rather than the proxy on localhost:2000.  So I changed every instance of "/command" to "./command" and now everything works perfectly.

I did in fact enable the higher levels of debug output but unfortunately could make no sense of them; somehow one request produced 200 lines of garbled output (I cleared the log beforehand).

Anyway, I was wrong in thinking that the proxy treated localhost:2000 as a black box, simply passing inputs in and taking outputs back.  It seems that the proxy serves the client directly under the original URL of /projects/CS32Brewer...

I was afraid that I'd have to configure SSL as well, given that I previously had complaints from the server about insecure requests over http; fortunately, something just clicked and it works fine.

Thank you for all your help and patience; let me know if you'd like any more details.  I apologize for my vagueness and ambiguity, and thank you for sticking with me on it.

On Thu, Aug 27, 2015 at 12:12 AM, Kurtis Rader <krader <at>> wrote:
On Tue, Aug 25, 2015 at 10:25 PM, Steven Shi <steven200796 <at>> wrote:
The "app" consists of a Maven-built Java back-end which utilizes the Maven build of Apache Spark ("").  To my understanding, the Spark API initiates a Jetty server which listens on a specified port for GET/POST requests.  It then either processes the data sent in a POST request or simply returns the HTML/CSS/JS page for a GET request.

You really need to stop using the term "app" for every piece of software involved in this problem. You've used "app" to refer to the jQuery client front end and the Java backend and possibly the Apache HTTP middle layer. Not to mention the URI prefix being proxied.
When I said "the port 2000 is lost", I meant that if I go into the app through the proxy /app and attempt to issue GET/POST commands via the jQuery front-end interface, Google's developer console shows the remote address as my_ip:(80, 443) and the request URL as simply my_hostname/command.  Likewise, the origin and the host are incorrect.

I can't tell from the Chrome developer console snapshot if the "POST" request that failed with a 404 status was due to a redirect or if it was the original request made by the browser client code. If it was the original request then something is wrong in how you have told the client code which URI to use. If it was due to a redirect the question becomes whether the redirect is from Apache or your Spark backend.

I'm willing to bet the problem is that the Spark backend is issuing an HTTP redirect that leads to the Apache middle layer not knowing how to handle the subsequent request. The solution might be as simple as enabling rewriting of proxied HTML with this directive in your virtual host:

ProxyHTMLEnable On

That will require an appropriate "LoadModule proxy_html_module" in your Apache config.
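Something like the following sketch, in other words (the module filenames vary by distro, and the URLMap mapping is an assumption based on the prefix in your config, not tested against your setup):

```apache
# Sketch only -- module paths and the mapping are assumptions
LoadModule xml2enc_module    modules/mod_xml2enc.so
LoadModule proxy_html_module modules/mod_proxy_html.so

<Location "/projects/CS32Brewer/">
    ProxyHTMLEnable On
    # Map links the backend writes as absolute ("/command") back under the proxied prefix
    ProxyHTMLURLMap / /projects/CS32Brewer/
</Location>
```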

Your Apache config proxies the URI prefix "/projects/CS32Brewer/". Is that the URI prefix your browser client code is using? Does the Apache access log show requests from the browser with the URI prefix? Have you enabled higher levels of debug output from Apache as I suggested? If so what does the Apache error log show for a failing request?
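For reference, the sort of directive I had in mind for the debug output (a sketch; this per-module syntax assumes Apache 2.4 or later -- pick whichever modules are relevant):

```apache
# Raise logging detail for only the proxy and rewrite modules,
# leaving everything else at the default level
LogLevel info proxy:trace2 proxy_http:trace2 rewrite:trace3
```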
If HTTP_HOST is the "Host:" shown in the Google dev console "Request Headers", the app is not setting it correctly on /app (it is "my_hostname" rather than "my_hostname:2000").
If HTTP_HOST is not what I describe above, then how do I check what it is?

What do you mean by "correctly on /app"? Nowhere in the data you've provided so far, in particular the Apache virtual host configuration, is a URI prefix of "/app" referenced. If you're using that as a synonym for "/projects/CS32Brewer" please say so.

Kurtis Rader
Caretaker of the exceptional canines Junior and Hank

Haag, Jason | 27 Aug 20:28 2015

Apache Rewrite Rules & Physical Directories

Hi Apache Users,

I'm a newb with Apache and could use some help. I'm using per-directory
rewrites and serving RDF and HTML files with Apache.

Here's my situation:

I have several independent html & rdf files in a directory named "/verbs/"

For example,

/verbs/answered.html, answered.rdf, answered.jsonld
/verbs/asked.html, asked.rdf, asked.jsonld

I also have a single representation of these same verbs as one HTML
list named index.html.

For example,


I would like to be able to GET both the independent verbs and the
entire list using Accept header request.

I can currently GET the independent terms, and it returns the
appropriate content type based on the Accept header.

For example,

curl --header "Accept: application/rdf+xml" -L

However, if I attempt to GET the entire list of verbs it ALWAYS
returns HTML. For example,

curl --header "Accept: application/rdf+xml" -L

I have two .htaccess files. One is at the /verbs/ directory level and
one is a directory higher. The .htaccess at the higher directory is
pasted below.

It appears that my problem is that I have an actual physical directory
with the same name "/verbs/" as my rewrite rule pattern ^verbs$. For
example, if I change the rewrite pattern to ^verbset$, my GET
requests work fine and return any of the content types that are
requested.
I would rather not work around this by renaming the rewrite pattern,
and would prefer to learn more about Apache instead of trying to apply
dated linked-data recipes. Are there any specific directives that
might be causing this conflict? Or is it a limitation that rewrite
patterns can't use the same name as physical directories on the
server? Thanks in advance for any advice or resources!

# Turn off MultiViews
Options -MultiViews

# Rewrite engine setup
RewriteEngine On

# Directive to allow Cross origin requests
Header add Access-Control-Allow-Origin "*"
Header add Access-Control-Allow-Headers "origin, x-requested-with, content-type"
Header add Access-Control-Allow-Methods "PUT, GET, POST, DELETE, OPTIONS"
# Directive to ensure *.rdf files served as appropriate content type,
# if not present in main apache config
AddType text/html .html
AddType application/rdf+xml .rdf
AddType text/turtle .ttl
AddType application/ld+json .jsonld

RewriteBase /

# Rewrite rule to serve HTML content from the vocabulary URI if requested
RewriteCond %{HTTP_ACCEPT} text/html [OR]
RewriteCond %{HTTP_ACCEPT} application/xhtml\+xml [OR]
RewriteCond %{HTTP_USER_AGENT} ^Mozilla/.*
RewriteRule ^verbs$ [R=303]

# Rewrite rule to serve RDF/XML content from the vocabulary URI if requested
RewriteCond %{HTTP_ACCEPT} application/rdf\+xml
RewriteRule ^verbs$ [R=303]

# Rewrite rule to serve Turtle content from the vocabulary URI if requested
RewriteCond %{HTTP_ACCEPT} text/turtle
RewriteRule ^verbs$ [R=303]

# Rewrite rule to serve JSON-LD content from the vocabulary URI if requested
RewriteCond %{HTTP_ACCEPT} application/ld\+json
RewriteRule ^verbs$ [R=303]

# Choose the default response
# ---------------------------

# Rewrite rule to serve the RDF/XML content from the vocabulary URI by default
RewriteRule ^verbs$ [R=303]
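One interaction worth checking (an assumption on my part, not something confirmed by the config above): with a real /verbs/ directory present, mod_dir's DirectorySlash behavior redirects a request for /verbs to /verbs/, and the trailing slash then keeps a pattern like ^verbs$ from matching. A sketch of a pattern that tolerates the slash, with a hypothetical RDF target:

```apache
# Sketch: accept both /verbs and /verbs/ so the physical directory's
# trailing-slash redirect cannot bypass the rule (the target file is hypothetical)
RewriteCond %{HTTP_ACCEPT} application/rdf\+xml
RewriteRule ^verbs/?$ /verbs/index.rdf [R=303,L]
```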
Kiruthiga Balakrishnan | 27 Aug 08:03 2015

*** glibc detected *** /usr/sbin/apache2: corrupted double-linked list


I am running Apache2 on an Ubuntu machine. Mine is a PHP website. Recently I have started experiencing latency issues on my website, and eventually the site crashes during latency peaks. I found these entries in the error log, but I am not sure how to proceed to resolve this.

*** glibc detected *** /usr/sbin/apache2: corrupted double-linked list: 0x00007fa6518dca20 ***
*** glibc detected *** /usr/sbin/apache2: corrupted double-linked list: 0x00007fa6518dce70 ***
*** glibc detected *** /usr/sbin/apache2: corrupted double-linked list: 0x00007fa651560ec0 ***
*** glibc detected *** /usr/sbin/apache2: corrupted double-linked list: 0x00007fa65174acb0 ***
*** glibc detected *** /usr/sbin/apache2: corrupted double-linked list: 0x00007fa651749ad0 ***

Please help by suggesting a solution for this.


aparna Puram | 26 Aug 14:21 2015

mod_jk to mod_proxy and mod_proxy_balancer conversion

I have a requirement to get the following lines from my uriworkermap file, which used mod_jk, converted to mod_proxy_balancer.





I have the above properties in the uriworkermap file, and I would like to implement the same settings with the proxy balancer. Any suggestions on how to do this?

can I make it:

ProxyPass /path1 ajp://localhostserver1:8009/path1 route=route1
ProxyPass /path2 ajp://localhostserver2:8009/path2 route=route1

ProxyPass /path3 ajp://localhostserver3:8009/path3 route=route1
ProxyPass /path4 ajp://localhostserver3:8009/path4 route=route1

ProxyPass /path5 ajp://localhostserver4:8009/path5 route=route1

<Location "/*.png">
    ProxyPass "!"
</Location>

and so on for all of them?

would that work this way?
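For comparison, a mod_proxy_balancer version of the same idea might look like this sketch (the balancer name, member hosts, and sticky-session cookie are placeholders, not values taken from the uriworkermap file):

```apache
# Sketch only -- names, hosts, and routes are placeholders
<Proxy "balancer://appcluster">
    BalancerMember "ajp://server1:8009" route=route1
    BalancerMember "ajp://server2:8009" route=route2
</Proxy>

# Exclusions must come before the general mappings
ProxyPass "/images/" "!"

ProxyPass        "/path1" "balancer://appcluster/path1" stickysession=JSESSIONID|jsessionid
ProxyPassReverse "/path1" "balancer://appcluster/path1"
```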

Aparna Puram
Good Guy | 26 Aug 00:36 2015

Local Machine

Has anybody seen a good guide to installing Apache, PHP, and MySQL on an
Ubuntu desktop?  I want to avoid using XAMPP or anything similar,
as I like to know what is going where, etc.

Tom Browder | 25 Aug 12:30 2015

CGI Error with Readonly Database

I am using Apache 2.4.16 and trying to get my CGI programs to work
after transferring to a new remote host.

After much debugging I am finally getting to the point where my script
is trying to insert a new or updated record in my SQLite db and I get
this error:


Sorry, the following error has occurred:

DBD::SQLite::db do failed: attempt to write a readonly database at line 312.

The permissions in all my Apache server dirs and files are set by a
Perl script like this:

  # set perms on data
  # completely private
  # $TODIR is the appropriate document root dir for each host and aliased dir
  my $cmd1 = "chown -R web-user:web-content $TODIR";
  my $cmd2 = "find $TODIR -type f -exec chmod 640 {} \\;";
  my $cmd3 = "find $TODIR -type d -exec chmod 750 {} \\;";
  my $cmd4 = "find $TODIR -name '*.cgi' -exec chmod 750 {} \\;";
  my $cmd5 = "find $TODIR -name '*.pl' -exec chmod 750 {} \\;";

In addition, the HTML file for one index.html is set 'chmod +x' for
SSI use (for the XBitHack).
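For context, the XBitHack setup amounts to this sketch (assuming SSI is otherwise enabled for the directory):

```apache
# Process SSI directives in any text/html file whose user-execute bit is set
Options +Includes
XBitHack on
```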

I have two special dirs shared by all my virtual hosts set this way in
my httpd.conf file:

  # need common cgi directory for all web sites
  ScriptAlias   /cgi-bin-cmn/      /home/web-server-common/cgi-bin-cmn/
  # need common data directory for all web sites
  Alias         /data-cmn/         /home/web-server-common/data-cmn/

Those directories happen to contain the script and db which result in the error.

Running 'ls -l' on the last two directories yields:

root <at> dedi2:/home# ls -lR web-server-common
total 8
drwxr-s--- 2 web-user web-content 4096 Aug 25 00:00 cgi-bin-cmn
drwxr-s--- 2 web-user web-content 4096 Aug 21 20:36 data-cmn

total 56
-rw-r----- 1 web-user web-content   648 Aug 23 22:12
-rw-r----- 1 web-user web-content 14226 Aug 22 22:01
-rw-r----- 1 web-user web-content 16295 Aug 25 00:00
-rwxr-x--- 1 web-user web-content   409 Aug 24 12:23
-rwxr-x--- 1 web-user web-content   250 Aug 21 13:28 show-envvars.cgi
-rwxr-x--- 1 web-user web-content  8087 Aug 24 01:28 show-site-statistics.cgi
-rwxr-x--- 1 web-user web-content  3539 Aug 24 22:39 update-site-statistics.cgi

total 2012
-rw-r----- 1 web-user web-content 2059264 Aug 21 15:51

Any ideas, or do you need more info from me?


Best regards,

Tom Browder | 25 Aug 12:10 2015

SSI best practice: XbitHack or .shtml

Anyone have an opinion of the best way to indicate an SSI file to be scanned?
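For concreteness, the two approaches amount to roughly this sketch:

```apache
# Both approaches need SSI enabled for the directory
Options +Includes

# Option 1: XBitHack -- process any text/html file whose user-execute bit is set
XBitHack on

# Option 2: extension-based -- process only files ending in .shtml
AddType text/html .shtml
AddOutputFilter INCLUDES .shtml
```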


Steven Shi | 25 Aug 05:30 2015

Proxying From Directory To App On Port

I'm trying to proxy /app to localhost:2000 (where the app is hosted).  Unfortunately, whenever the app makes a GET/POST request, the port 2000 is lost and the request is made to localhost:80 rather than localhost:2000.

I feel as if the solution is something simple but I haven't been able to discover it from three days of trial and error.
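For reference, the kind of directives involved look roughly like this (a sketch, not my exact config):

```apache
# Sketch: forward /app to the backend on port 2000 and rewrite
# Location/redirect headers on the way back
ProxyPass        /app/ http://localhost:2000/
ProxyPassReverse /app/ http://localhost:2000/
```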
Nikolay Yakubitskiy | 23 Aug 21:04 2015

Websockets behind Apache and Nginx proxy, Connection and Upgrade headers not present

I have a problem. Apache listens on a public IP and proxies all /ssd requests to nginx, which proxies /city-dashboard requests to another server that uses websockets.

Apache default.conf:

ProxyRequests On
ProxyPreserveHost On
<Location /test>
    ProxyPass ""
    ProxyPassReverse ""
    Allow from All
    Header set Connection "upgrade"
    Header set Upgrade "websocket"
</Location>


map $http_upgrade $connection_upgrade {
    default upgrade;
    ''      close;
}
map $http_sec_websocket_key $upgr {
    ""      "";          # If the Sec-Websocket-Key header is empty, send no Upgrade header
    default "websocket"; # If the header is present, set Upgrade to "websocket"
}
map $http_sec_websocket_key $conn {
    ""      $http_connection; # If no Sec-Websocket-Key header exists, set $conn to the incoming Connection header
    default "upgrade";        # Otherwise, set $conn to upgrade
}
include /etc/nginx/conf.d/*.conf;

nginx default.conf:
location /city-dashboard {
    rewrite ^/city-dashboard\/(.*)$ /$1 break;
    proxy_pass;
    proxy_set_header Host $host;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $upgr;
    proxy_set_header Connection "Upgrade\n\r";
    proxy_set_header Sec-WebSocket-Key $http_sec_websocket_key;
    proxy_read_timeout 100;
    proxy_redirect off;
    proxy_set_header X-Forwarded-Proto $scheme;
    add_header Front-End-Https on;
}

Request headers:

GET wss:// HTTP/1.1
Host:
Connection: Upgrade
Pragma: no-cache
Cache-Control: no-cache
Upgrade: websocket
Origin:
Sec-WebSocket-Version: 13
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10_10_5) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/44.0.2403.157 Safari/537.36
Accept-Encoding: gzip, deflate, sdch
Accept-Language: en-US,en;q=0.8
Sec-WebSocket-Key: RlhNyYNipJ1RUOU7nl4xMA==
Sec-WebSocket-Extensions: permessage-deflate; client_max_window_bits

Response headers:

HTTP/1.1 101 Switching Protocols
Server: nginx/1.8.0
Date: Sat, 22 Aug 2015 10:05:02 GMT
Sec-WebSocket-Accept: e7UKY/y5mIi8RTmhLQgFJ566dfo=

The response headers lack Upgrade and Connection. If I hit the site directly through nginx I get a 400 error, and the websockets still do not work. Removing Apache from the chain is not an option, because it is run by another organization. Has anyone dealt with this, and can you suggest a workaround or a configuration change to make it work?
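For what it's worth, on Apache 2.4 the usual approach is mod_proxy_wstunnel rather than hand-setting the Upgrade/Connection headers -- a sketch (the backend host and port are placeholders):

```apache
LoadModule proxy_wstunnel_module modules/mod_proxy_wstunnel.so

<Location "/test">
    # Tunnel the websocket upgrade to the backend over the ws:// scheme
    # (host and port are placeholders)
    ProxyPass "ws://backend.example:8080/test"
</Location>
```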

Nikolay Yakubitskiy
steseagal87 <at>

Sterpu Victor | 23 Aug 08:51 2015

SSL - How client certificates are verified?

I have a web page that asks for client certificate.
These are the options for this:
SSLVerifyClient require
SSLVerifyDepth 10

How does SSLVerifyClient verify the client certificate?
Does this option protect against certificates manually made with a fake public/private key pair?
In other words, can someone make a certificate identical to the original, attach another set of public and private keys, and pretend to be someone else?
Thank you


John Donnelly | 21 Aug 23:31 2015

Building Apache 2.2.20 "standalone" and the role of server/exports.c


I am working on a project to update an old version of Apache (2.0.65) to version 2.2.20 on a PPC embedded Linux platform in a cross-compiling environment. This is outside of the autoconf (./configure) tools and procedure that one would typically use to build Apache.

I am stuck at the linkage phase of httpd. I have managed to compile all of modules/* and server/*.c, but I am not sure what role server/exports.c serves in the build. It appears to be generated by an awk step in the makefile in an x86 environment. When I generate an exports.c for my build, I end up getting lots of undefined references when I link the object files and libs, even though the symbols appear in the corresponding libapr/libaprutil libraries.

Anyone have experience building Apache in this type of standalone environment outside of autoconf?

Suggestions welcomed.

Thank you.