Antje | 20 Oct 14:52 2014

[OSM-dev] Extending OSM capability to realise a fictional country


While I have a bit of trouble figuring out how to use a moderated mailing list, SomeoneElse recommended that I post this enquiry here in case anyone knows more about the nuts and bolts of a server… so I did.

After nearly two years of editing OpenStreetMap (OSM) with what I know of Greater London (as a way to escape from real life), I hope to extend my map-editing skills to a fictional NationStates country called Minoa. I have been thinking about how I would map this country properly for over five years, and recently I considered using OSM technology because of the flexibility it offers in comparison to Adobe Illustrator or Google Earth.

I appreciate invitations to use OpenGeoFiction, but for this project, codenamed OpenMinoaMap (OMM), I wish to implement a local production-stage server because I want to understand the process of setting one up, whether automated or manual. Putting my own project live on the internet is a long-term goal that I do not wish to consider now, due to budget constraints.

Hence, my objective for this project is to create a local production-stage platform with OSM technology that allows me to create and edit my fictional world on OMM in the same way as I do for the real world on OSM, via Potlatch or JOSM.

I plan to use a refurbished computer (it was the last to have a 32-bit OS as it was built in 2009) to host my project. My specific requirements include the following:
  • I would like to be able to edit OMM through JOSM, because I envisage Minoa as a large and detailed country, and I do not want to open the whole planet just to add a new building or road that I thought of the night before.
  • I would like to install a couple of extra stylesheets in addition to the standard style to show off my public transport systems. 
  • I would like the software to start up automatically because the server will not operate 24 hours a day to save energy.
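
As a sketch for the autostart requirement: Ubuntu 14.04 uses Upstart rather than systemd, so one possibility is a job file like the following. Everything here — the job name, the user, and the paths — is only an illustrative assumption, not a tested recipe:

```
# /etc/init/omm.conf -- hypothetical Upstart job; names and paths are
# illustrative only
description "OpenMinoaMap rails server"
start on runlevel [2345]
stop on runlevel [016]
setuid omm
chdir /omm/openstreetmap-website
exec bundle exec rails server -e production
```

With a file like this in place, the server process would be started at boot and stopped on shutdown without manual intervention.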

I started this topic because, while I have already looked around for instructions on setting up the Rails Port, I think there may be specific instructions needed to realise my project (especially regarding the JOSM capability and server root locations): obviously, I wish to avoid mistakes that waste my time. I am also aware of the automated Docker script, but I am not sure whether it requires code changes to suit my requirements.

Here are the specifications:
  • Operating System: Ubuntu 14.04.1 LTS
  • Processor: Intel Core i7 920, 2.67 GHz, 4 physical cores
  • Memory: 6144 MB RAM
  • 1st disk: 500 GB, for holding the Ubuntu operating system
  • 2nd disk: 1.5 TB, for holding the production database and tiles, mounted on Ubuntu at /omm, with the web root also at /omm instead of /var/www
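
For what it is worth, the standard install steps from the openstreetmap-website INSTALL.md would look roughly like the sketch below on a machine like this. The package list is abridged, and the checkout path and database settings are only assumptions to match the specs above — this is not a verified walkthrough:

```
# dependencies (abridged; INSTALL.md lists the full set)
sudo apt-get install ruby ruby-dev libpq-dev postgresql postgresql-contrib

# check the Rails port out under the /omm mount
cd /omm
git clone https://github.com/openstreetmap/openstreetmap-website.git
cd openstreetmap-website
bundle install

# configure and create the production database on the /omm disk
cp config/example.database.yml config/database.yml   # then edit for production
bundle exec rake db:create RAILS_ENV=production
bundle exec rake db:migrate RAILS_ENV=production

# run the server; JOSM's API URL can then be pointed at http://localhost:3000
bundle exec rails server -e production
```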

I am mostly a Mac user who is quite new to Ubuntu, which I believe is what OpenStreetMap uses. I hope that easy-to-understand instructions for setting up a production version of the server will help other fictional-world mappers make great use of OSM technology to realise their often-extensive fantasy countries.

Thanks in advance,

Antje (Amaroussi)
dev mailing list
dev <at>
Joel ... | 20 Oct 04:09 2014

Re: [OSM-dev] Looking for OSM Licensing Clarification

Understood. My use case is prohibited. Remove the deception on the front page. It does not help the business, because businesses succeed by way of customer service, respecting others, and showing dignity and morals. To refresh your memory, the first few lines of the page read:

"Welcome to OpenStreetMap, the project that creates and distributes free geographic data for the world. We started it because most maps you think of as free actually have legal or technical restrictions on their use, holding back people from using them in creative, productive, or unexpected ways."

> Date: Sun, 19 Oct 2014 21:35:58 -0400
> Subject: Re: [OSM-dev] Looking for OSM Licensing Clarification
> From: richard <at>
> To: audiofile <at>
> CC: dev <at>
> The OpenStreetMap license ODbL is what is known as a Share Alike
> license. In that class of licenses, one is granted more permissions
> when one shares and shares alike. If, on the other hand, one wishes
> to keep things for ones exclusive benefit, ones permissions are more
> limited, perhaps even to a null set.
> Again, in very broad terms, one has more freedom if one uses the
> OpenStreetMap data without changing it. Combining OpenStreetMap data
> with other non-OpenStreetMap data sets, is an area where the
> obligations to share are deliberately strong.
> The details matter in this subject area, and it is a benefit for you
> to understand your obligations from the start. Please consider giving
> more details and holding further discussion on the legal-talk <at> mailing
> list[1], which is the ideal location for this topic.
> [1]
Joel ... | 20 Oct 00:46 2014

[OSM-dev] Looking for OSM Licensing Clarification

I think I misread what was meant on the OSM front page. It reads "We started it because most maps you think of as free actually have legal or technical restrictions on their use, holding back people from using them in creative, productive, or unexpected ways."

The closest use case I'm wanting OSM for is along the lines of a routing and turn-by-turn application.

Does the license OSM is using permit people to:

1) download OSM data into a local database
2) combine it (locally, not uploaded to OSM) with additional free data sets having compatible licenses
3) leverage proprietary "for a fee" data that is not saved, but with compatible licenses
4) develop new proprietary algorithms that utilize said data
5) develop a new proprietary user interface for said data and algorithms, with OSM branding/crediting displayed on said interface
6) earn money from the user interface
7) keep the finished product, which also contains OSM data, private

I'm looking to avoid giving some other potential business person or company a copy of what I created. The more I read the OSM license, the more confused I became, after possibly misunderstanding the heading of the front page of

How do you determine if my use case is accepted according to the license OSM is using?

In Chapter 4 the license discusses the Derivative Database. It makes a reference to "Publicly Convey this Database". The dictionary definition of "convey" is "transport or carry to a place". Does Chapter 4 mean my use case is approved for that particular chapter, since I am using the data in an app as opposed to exposing the database as an offering in its entirety?

Peter K | 20 Oct 00:04 2014

[OSM-dev] world wide PBF exports corrupt

Hi there,

it looks like all the PBF export servers I know of have either no recent exports or corrupt ones (small file sizes of 17 GB or 6 GB instead of 27 GB):

Is this a known issue of some export tool? Or do you know an alternative?

(Also it looks like there are no daily export servers anymore?)



-- - Fast & Flexible Road Routing

Sandor Seres | 18 Oct 22:23 2014

[OSM-dev] Traps in vector mapping

There are many traps developers meet while developing vector-based mapping systems. The consequences are errors, some more visible than others. Just a few days ago a researcher from Samsung Electronics asked, on the help forum, a legal question related to OSM licensing (“OSM Developer Licenec”, 07 Oct). In one of the comments, he was advised to visit a site, already funded by Samsung, where they use OSM. So I did the same, and saw the same traps again (and again). The illustrations and examples are taken from the demo mapping system of this site. If interested, you may find them via this link (or you may just repeat the same experiments).

I just thought it might be useful to make a comment/warning to those planning vector mapping development. Let me emphasize just two traps, with highly visible consequences: fragmentation and stripes (breaks/virtual bridges). The source of these traps is most probably in the data-preparation-related algorithms.

An inevitable section of any map data preparation is the generation of scale levels. Vector scale levels are created in a radically different manner, and under different criteria, compared to the classical raster zoom levels. Even so, many raster mapping systems use vector scale levels as input for their zoom-level generation. In any case, scale-level generation is unthinkable without data generalisation, which in turn consists of scaling, vector smoothing and filtering. While vector scaling is a straightforward function, the last two are much more complex and, as a rule, heuristics based.
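
As a concrete illustration of the filtering step, here is a minimal sketch of Ramer–Douglas–Peucker line simplification — one common generalisation filter, chosen purely as an example, not necessarily the algorithm any particular pipeline uses:

```python
def perpendicular_distance(p, a, b):
    # Distance from point p to the infinite line through a and b.
    (px, py), (ax, ay), (bx, by) = p, a, b
    dx, dy = bx - ax, by - ay
    length = (dx * dx + dy * dy) ** 0.5
    if length == 0.0:
        return ((px - ax) ** 2 + (py - ay) ** 2) ** 0.5
    return abs(dx * (ay - py) - dy * (ax - px)) / length

def simplify(points, epsilon):
    """Ramer-Douglas-Peucker: drop vertices closer than epsilon to the chord."""
    if len(points) < 3:
        return list(points)
    # find the vertex farthest from the chord first..last
    dmax, index = 0.0, 0
    for i in range(1, len(points) - 1):
        d = perpendicular_distance(points[i], points[0], points[-1])
        if d > dmax:
            dmax, index = d, i
    if dmax <= epsilon:
        return [points[0], points[-1]]   # all interior vertices filtered out
    # otherwise split at the farthest vertex and recurse on both halves
    left = simplify(points[:index + 1], epsilon)
    right = simplify(points[index:], epsilon)
    return left[:-1] + right

print(simplify([(0, 0), (1, 0.1), (2, -0.1), (3, 0)], 0.5))
# -> [(0, 0), (3, 0)]
```

Exactly this kind of vertex dropping is what, applied to already-fragmented tiny areas, makes them vanish entirely at small scales.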

When scaling vector data down, the vectors become shorter and the curvatures less visible. A smoothing algorithm tries to replace a series of consecutive vectors with a resultant vector without a noticeable visual difference. But no matter which vector smoothing algorithm is used, self-crossings are practically unavoidable on area borders, especially on thin sections such as rivers, fjords, channels, peninsulas and so on. Because many fill algorithms require simple closed border lines, smoothing algorithms often use an additional procedure to avoid self-crossings. One of the most used is to divide the area into new independent areas with no self-crossing borders (between neighbouring self-crossing points). In this way the data generalisation eventually creates many small/tiny areas, in addition to the set of small areas from the input data. After this, the filtering/data-reduction algorithm ignores many of these tiny (sub-pixel thin or sub-pixel sized) areas, and the destruction of area connectivity is a fact. For illustration, see the screen dump (image 1) taken today from the mentioned site.

So, the most probable cause/trap for the connectivity destruction is in the data generalisation algorithm and in the highly fragmented input geometry (like the river banks).

The second mentioned trap causes thin stripe errors, less visible in rendering but more confusing, and present even at higher scale (zoom-in) values. For illustration see image 2, also taken from the mentioned site today.

Any vector smoothing algorithm (no matter if distance based, surface based, corridor based …) inherently moves the line geometry nodes/vertices slightly. So, if adjacent area fractions are processed independently, a common border section may come out of the smoothing as two slightly different poly-line sections. In addition, the decimal-to-integer rounding in rendering may further increase this effect (known from the vectorization, or raster-to-vector transformation, age as the white pixel effect). Note that the white pixel effect is hard to avoid even in raster-based mapping systems if the source areas are fragmented: the rounded pixel position values may differ when a common border vector is AB for one area and BA for the adjacent area in rendering. For illustration see image 3, made from a BigMap 2 (toner layer) screen dump today. This blurring effect is present in most raster mapping systems, though less visible when light colour shades are used for area decoration/fill (eventually combined with a dithering type of pattern). But that merely hides/masks the white pixel effect in raster zoom levels.
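
A toy sketch of why independently smoothing adjacent areas breaks their shared border — the smoothing rule here is a deliberately simple neighbour average, purely for illustration, not any production algorithm:

```python
def smooth_open(path):
    # Toy smoothing: each interior vertex becomes the average of itself
    # and its two neighbours; endpoints stay fixed.
    out = [path[0]]
    for i in range(1, len(path) - 1):
        (x0, y0), (x1, y1), (x2, y2) = path[i - 1], path[i], path[i + 1]
        out.append(((x0 + x1 + x2) / 3.0, (y0 + y1 + y2) / 3.0))
    out.append(path[-1])
    return out

# A border section shared by two areas...
shared = [(0.0, 0.0), (1.0, 0.05), (2.0, 0.0)]
# ...but each area hands the smoother a polyline that continues into
# its *own* geometry on one side of the border.
area_a = [(-1.0, 1.0)] + shared    # border continues upward in area A
area_b = [(-1.0, -1.0)] + shared   # border continues downward in area B

sa = smooth_open(area_a)[1:]       # smoothed copy of the border, as seen by A
sb = smooth_open(area_b)[1:]       # smoothed copy of the border, as seen by B

# The first "shared" vertex now differs between the two areas, leaving a
# sliver gap (or overlap) along what used to be a common border:
print(sa[0], sb[0])
```

The vertex is pulled toward each area's own neighbouring geometry, so the two smoothed copies of the border no longer coincide — the stripe/virtual-bridge effect described above.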

It is perhaps worth noting another consequence of the white pixel effect in raster zoom levels. If you are curious and enthusiastic, take a raster mapping system (e.g. Google Maps, Bing Maps, Slippy Map(s) …), zoom roughly to 1:10 million and go to an area with lots of islands/spots (e.g. the south-east part of Finland in the Baltic Sea). You will find a large number of confusing spots with various colour shades between the light blue (the sea) and light grey (the land) colours. You may even take a screen dump of that area, load it into an image editor and enlarge it several times for a better proof (see image 4, created in this way from Google Maps). Besides having no declared meaning, these spots also have a considerable impact on the size of the corresponding PNG (they need 24 bits, or more, per pixel at the input side of the compression).

Finally, someone might say: so what? I can live with those traps and errors. Of course, that is just fine, especially while the mapping service is free. But when you have to pay for the service, the data transmission and the client-side flexibility, aesthetics and efficiency, then suddenly your criteria may become radically stronger. Also note that with a robust vector data-preparation tool chain we can avoid all the mentioned traps/errors and achieve much, much more.


Oslo, 14.10.14. Regards, Sandor


Martin Raifer | 18 Oct 17:21 2014

Re: [OSM-dev] Overpass API: Getting nodes together with centroids of areas ("POI query")?

Yes. Just replace the first print statement (`<print mode="body"/>`)
with `<print mode="body" geometry="center"/>` and drop the following
two lines (recurse and second print). This will give you the
coordinates for all nodes and an approximate* centroid for all ways
and relations in the result set. Example:
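
Putting that together with the query quoted below, the modified script would read roughly as follows (same bbox placeholder as in the original):

```
<osm-script output="json" timeout="25">
  <union>
    <query type="node">
      <has-kv k="tourism" v="zoo"/>
      <bbox-query {{bbox}}/>
    </query>
    <query type="way">
      <has-kv k="tourism" v="zoo"/>
      <bbox-query {{bbox}}/>
    </query>
    <query type="relation">
      <has-kv k="tourism" v="zoo"/>
      <bbox-query {{bbox}}/>
    </query>
  </union>
  <!-- one print with geometry="center"; no recurse, no second print -->
  <print mode="body" geometry="center"/>
</osm-script>
```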

* For performance reasons, this "center" coordinate is not an
exact centroid, but simply the middle of the bounding box around the
element's geometry.


On Sat, Oct 18, 2014 at 3:54 PM, Stefan Keller <sfkeller <at>> wrote:
> Hi,
> A typical query gives all 1. nodes, 2. ways and 3. areas with
> "tourism=zoo” (see below).
> But I'd like to get back only point geometries which consist of 1.
> nodes together with (union) 2. centroids calculated on the fly from
> areas. I coined this a "POI query".
> Possible?
> Yours, S.
> <!--
> This has been generated by the overpass-turbo wizard.
> Get all nodes, ways and areas with "tourism=zoo”
> -->
> <osm-script output="json" timeout="25">
>   <!-- gather results -->
>   <union>
>     <!-- query part for: “tourism=zoo” -->
>     <query type="node">
>       <has-kv k="tourism" v="zoo"/>
>       <bbox-query {{bbox}}/>
>     </query>
>     <query type="way">
>       <has-kv k="tourism" v="zoo"/>
>       <bbox-query {{bbox}}/>
>     </query>
>     <query type="relation">
>       <has-kv k="tourism" v="zoo"/>
>       <bbox-query {{bbox}}/>
>     </query>
>   </union>
>   <!-- print results -->
>   <print mode="body"/>
>   <recurse type="down"/>
>   <print mode="skeleton" order="quadtile"/>
> </osm-script>

Pierre Béland | 18 Oct 17:15 2014

[OSM-dev] IRC webpage

The irc webpage facilitates access by new users. Interesting option.

I don't know who takes care of this. Would you please add #osm-ht to it?


Grant Slater | 16 Oct 00:53 2014

[OSM-dev] OSM Operations Challenge: Tile CDN QoS

Hi All,

I am part of the OpenStreetMap sysadmin team...

I am involved with running of the CDN.

Currently we have 2 perpetually overloaded rendering servers (orm & yevaud)
The render servers are fronted by a collection of caching servers
distributed all over the globe. See:
Clients are directed to the cache servers by "Geo" DNS. (PowerDNS geo backend)
We use squid 2.7 (ancient) for caching, with squid delay pools (per-IP
token buckets) to slow down mass-downloaders / abusers who would
normally degrade the service for everyone.

Squid delay pools are a basic token bucket implementation... Each client
IP is allocated a bucket; the client's download rate drains the bucket,
and the bucket is topped up at a slow rate. Once the bucket is drained,
a client IP cannot exceed the top-up rate.
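
The mechanism above can be sketched in a few lines — this is a generic token-bucket illustration, not squid's actual implementation:

```python
import time

class TokenBucket:
    """Per-client token bucket: downloads drain the bucket, which refills
    at a slow fixed rate; once empty, the client is capped at that rate."""

    def __init__(self, capacity, refill_per_sec):
        self.capacity = capacity
        self.tokens = capacity          # start with a full bucket
        self.refill_per_sec = refill_per_sec
        self.last = time.monotonic()

    def consume(self, amount):
        now = time.monotonic()
        # top the bucket up at the slow refill rate, capped at capacity
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.refill_per_sec)
        self.last = now
        if self.tokens >= amount:
            self.tokens -= amount
            return True                 # request allowed at full speed
        return False                    # bucket drained: client is throttled

bucket = TokenBucket(capacity=100, refill_per_sec=1)
print(bucket.consume(80))   # True  -- bucket starts full
print(bucket.consume(80))   # False -- drained; limited to the top-up rate
```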

I am working on a rewrite of the CDN caching layer to use modern
varnish cache (4.0).

I have also been looking at moving the token bucket abuse management
to linux QoS.

I'm working toward something like:

1) varnish + libvmod-vsthrottle triggers a log event for a client with
an excessive request rate.
2) log monitor fires off tc to switch client to a rate limited tc qdisc.
3) After x minutes log monitor resets client back to default qdisc.
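
Step 2 might look roughly like the tc invocations below — the device name, rates, class ids and the client IP are all placeholders, and this is a sketch rather than the script I'm actually running:

```
# one-time setup: HTB root qdisc with a default (unthrottled) class
tc qdisc add dev eth0 root handle 1: htb default 10
tc class add dev eth0 parent 1: classid 1:10 htb rate 1gbit
tc class add dev eth0 parent 1: classid 1:20 htb rate 512kbit   # penalty class

# step 2: steer an abusive client's traffic into the rate-limited class
tc filter add dev eth0 parent 1: protocol ip prio 1 u32 \
    match ip dst 203.0.113.42/32 flowid 1:20

# step 3: after x minutes, delete the filter so the client falls back
# to the default class
tc filter del dev eth0 parent 1: prio 1
```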

Basic tc script example:

Anyone with deep knowledge of varnish and/or linux QoS?
Any suggestions? Tips?

Happy to discuss here on list or on IRC Firefishy in #osm-dev (oftc)

Kind regards,


Sachin Dole | 15 Oct 15:03 2014

[OSM-dev] how to map new subdivision in suburban united states?

I just made an edit here:

I don't know if I tagged it correctly!

I intend to update this feature as time goes by, so I wanted to find out how to do it correctly at the outset. Please help here, or point me to a doc I can read about how to pick the correct tags.

Thank you!

President  |  Genvega Inc.  |  |  tel: 630.290.2561
Michael Kugelmann | 15 Oct 01:50 2014

[OSM-dev] known problems at field paper? (stamen)


In our local "Bavaria mailing list" a mapper showed up due to problems with "field paper" (Stamen).

Are there any known issues at the moment?

Problem description:
All the downloading etc. worked fine, but the upload does not work. The mapper tried multiple scanning resolutions etc.; "nothing" works.
I tried it with a scan (PNG) provided by the mapper. But the only thing I got was an error message:
I tried it with a scan (PNG) provided by the mapper. But the only thing I got was an error message:

Giving up.

You might try uploading your scan again, making sure that it’s at a reasonably high resolution (200+ dpi for a full sheet of paper is normal) and right-side up. A legible QR code is critical. If this doesn’t help, let us know.

For me it looks like the upload does not start at all...

For my try I used firefox (latest release).


PS: sorry for crosspost...

Ilya Zverev | 13 Oct 09:35 2014


Paul Norman wrote:
> On 10/12/2014 2:49 PM, Ilya Zverev wrote:
>> Hi! Yesterday I had a simple task: there is a rendering server, which
>> is not to be minutely updated. So its PostgreSQL database doesn't need
>> temporary "slim" tables. But I cannot import an extract without
>> creating those, so basically I can import 3 times less data than
>> possible.
> It's worth noting that the --drop mode in osm2pgsql will delete the slim
> tables when done, and also not create the indexes on those tables. As
> those indexes are large, this is a substantial space savings in itself.
> --drop also will create more compact indexes for the rendering tables.

Oh. I forgot to check osm2pgsql options and missed --drop. Now it's been 
added as an extra mode "loadc". It is ~7% faster than "load"+"clean", 
and produces an sql dump of nearly the same size. Thanks.
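
For reference, an invocation using the mode Paul describes might look like this (the database name and extract file are placeholders):

```
# --slim creates the temporary tables; --drop removes them when the import
# finishes and skips building their large indexes
osm2pgsql --slim --drop --database gis extract.osm.pbf
```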

