Tim Richter-Heitmann | 16 Sep 15:16 2014

(residual) variograms and trend adjustment


I just had a talk with a reviewer of my thesis. He strongly criticized 
the way I was interpreting variograms without prior trend adjustment.
In fact, I am following the route in "Applied Spatial Data Analysis with 
R", chapter 8. On page 218, I think it is stated that the basic assumption 
of the variogram is that the variance of the random function Z depends 
solely on the separation distances. However, the reviewer said that 
variograms should only display the residuals of Z (y = mx + b + e), never 
the complete term, and that I should first identify non-spatial trends in the data.
I find this very difficult without knowing the autocorrelation of a 
dataset. So the real question is: what comes first, autocorrelation or 
pairwise correlations of observations?
Starting on page 230 of the same book there is a tutorial on 
residual variogramming, but I think this assumes that I already know the 
trend functions I want to test, doesn't it?
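For what it's worth, gstat makes the residual route fairly mechanical: giving variogram() a formula with covariates on the right-hand side computes the variogram of the OLS residuals of that trend. A hedged sketch (the meuse lines follow gstat's own example data; the live base-R part below them only illustrates the detrending step):

```r
## gstat route (assuming sp and gstat are installed):
## library(sp); library(gstat)
## data(meuse); coordinates(meuse) <- ~ x + y
## variogram(log(zinc) ~ 1, meuse)           # raw variogram
## variogram(log(zinc) ~ sqrt(dist), meuse)  # variogram of trend residuals

## The detrending itself is just ordinary least squares, as in base R:
set.seed(1)
x <- runif(50)
z <- 2 + 3 * x + rnorm(50)      # toy data: linear trend plus noise
e <- residuals(lm(z ~ x))       # residuals, i.e. the 'e' in y = mx + b + e
stopifnot(abs(mean(e)) < 1e-8)  # OLS residuals are mean-zero by construction
```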
How would you deal with this remark?

Thanks, Tim
Agustin Diez Castillo | 16 Sep 14:11 2014

really slow plot of Portugal

I don't know if this is related to the crisis, but Switzerland plots fine whereas Portugal or Spain take forever
and even freeze R. Same with spplot. Any clues? 
# Switzerland
con <- url("http://gadm.org/data/rda/CHE_adm0.RData")
print(load(con)); close(con)  ## reconstructed: this is what prints the object name below
[1] "gadm"
  user  system elapsed 
 0.026   0.001   0.028 
  user  system elapsed 
 0.022   0.000   0.022 

# Portugal: sometimes the session halted for a while, with the system even
# claiming that R is not responding (on a Mac)
con <- url("http://gadm.org/data/rda/PRT_adm0.RData")
print(load(con)); close(con)  ## reconstructed: this is what prints the object name below
[1] "gadm"
  user  system elapsed 
556.032   1.133 565.243
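Plot time for these GADM level-0 files generally tracks the number of polygon vertices, and Portugal's coastline plus islands carries far more detail than Switzerland's land border. A hedged sketch for checking (the commented helper assumes sp is loaded and `gadm` is the object from the posted output; the live lines are a base-R stand-in so the idea runs without the downloads):

```r
## With a downloaded object (assuming sp is loaded):
## n_vertices <- sum(sapply(gadm@polygons, function(p)
##   sum(sapply(p@Polygons, function(q) nrow(q@coords)))))
## Compare n_vertices for CHE vs PRT; rgeos::gSimplify can thin the geometry.

# Base-R stand-in: rings as coordinate matrices, summed vertex counts
rings <- list(matrix(runif(20),  ncol = 2),   # a 10-vertex ring
              matrix(runif(200), ncol = 2))   # a 100-vertex ring
n_vertices <- sum(sapply(rings, nrow))
stopifnot(n_vertices == 110)
```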

con <- url("http://gadm.org/data/rda/ESP_adm0.RData")
(Continue reading)

Chris Clements | 16 Sep 10:32 2014

Wrap around distribution

Dear all,

I am currently plotting very simple marine species distributions based on presence data using
gConvexHull. The species I am working on is large and cosmopolitan, so I would like to be able to plot a
distribution that wraps around the world, but in 2D. So in the example below, the polygon would extend
east and west to the edge of the world map:


library(rgeos)     ## for readWKT and gConvexHull
library(maptools)  ## for wrld_simpl
data(wrld_simpl)

sightings <- gConvexHull(readWKT("GEOMETRYCOLLECTION(POINT(-120  -45), POINT(20  0), POINT(0  30),
POINT(150 -50), POINT(-160  20), POINT(60 -10), POINT(145  -5))"))

plot(sightings, col = "lightblue", lty = 0, xlim = c(-180, 180))

plot(wrld_simpl, add = TRUE, col = "black")
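One common workaround, sketched here in base R, is to recentre longitudes into a 0-360 frame before hulling, so Pacific-straddling points stop splitting across the ±180 seam; the base map would then need the same recentring (maptools has helpers for shifted world maps, but that part is an assumption, not tested against this exact data):

```r
lon <- c(-120, 20, 0, 150, -160, 60, 145)  # the sighting longitudes above
lon360 <- lon %% 360                       # -160 -> 200, -120 -> 240, etc.
stopifnot(all(lon360 >= 0 & lon360 < 360))
stopifnot(lon360[5] == 200)                # -160 recentred into the 0-360 frame
```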

Any help much appreciated.


Dr Christopher Clements

The University of Zürich,

Research: www.chrisclementsresearch.co.uk
(Continue reading)

Hodgess, Erin | 15 Sep 20:10 2014

krigeST question (yet again!)

Hello r-Sig-Geo-ers!

I have a goofy question (surprise!), please:

When we use krigeST, we can predict spatially in many locations other than the original data locations.  Can
we also predict temporally farther out, please?  For instance, say my data set runs from Jan 2014 - Aug 2014,
and I want to predict from Sep - Oct 2014.  Could I set that up in my prediction set (such that it would be
"reasonable"), please?




frankma | 14 Sep 16:58 2014

SpatioTemporal package error

Dear all,

I'm using the SpatioTemporal package,
which is fantastic for spatio-temporal analysis; however, I've come across a
hurdle that I cannot get over at the moment. Any help that anyone may be
able to give would be greatly appreciated.

I am afraid that my ignorance is likely causing a simple error; however, I
have not been able to resolve it.

Whilst estimating parameters I use:

model.dim <- loglikeSTdim(mesa.model)

and return:

List of 12
 $ T              : int 48
 $ m              : int 2
 $ n              : int 26
 $ n.obs          : int 26
 $ p              : Named int [1:2] 4 2
  ..- attr(*, "names")= chr [1:2] "const" "V1"
 $ L              : int 1
 $ npars.beta.covf: Named int [1:2] 2 2
  ..- attr(*, "names")= chr [1:2] "exp" "exp"
 $ npars.beta.tot : Named int [1:2] 2 2
  ..- attr(*, "names")= chr [1:2] "exp" "exp"
(Continue reading)

Barry Rowlingson | 14 Sep 12:28 2014

Re: Intended usage of gIntersection ?

I think we should merge and unify sp, rgdal, rgeos and raster...

 > require(gis)


On Sat, Sep 13, 2014 at 8:58 PM, Bernd Vogelgesang
<bernd.vogelgesang <at> gmx.de> wrote:
> Hi Robert,
> GREAT! It works!
> I think you saved my week(end).
> Would have never guessed that the raster package would do such things, so I
> completely avoided searching in that direction.
> Maybe I will learn to do the trick with gIntersection one day, but for
> today I'm back on the winning track with raster intersect. Crazy!
> Cheers
> Bernd
> Am 13.09.2014, 19:44 Uhr, schrieb Robert J. Hijmans <r.hijmans <at> gmail.com>:
>> The raster package has a few functions that extend rgeos by also
>> attempting to handle attribute data. In this case, see
>> raster::intersect
>> And the list of functions here:
>> ?"raster-package"
>> (section XIV)
(Continue reading)

DAlcaraz | 14 Sep 11:36 2014

Does plotKML handle skewed diverging continuous raster?

First of all, thank you very much for your great job with the plotKML
package for R. It is simply GREAT!!!
However, I've been fighting with this issue for a week, and I wonder
whether the package simply does not handle it yet.
How can I plot a KML from a raster whose values show deviations from 0 but
follow a skewed distribution? 
Ideally, I would like negative values in reds, positive values in blues, and
zero values in grey.

Thank you very much in advance for your help.

PS: I've pasted below a trivial example showing how the "plot" function can
handle this issue but plotKML does not.

install.packages("raster", dep=T)
install.packages("RColorBrewer", dep=T)
install.packages("plotKML", dep=T)

r<- raster(ncol=5,nrow=2)
values(r) <- c(-5,-4,-1,0,0,3,-2,-6,-6,-6) #Positive and Negative changes
hist(r, main="Diverging skewed distribution of raster data")

DivColorBreaks <- c(-6, -3, -0.1, 0.1, 3, 6)  # diverging class breaks around zero
NDivBreaks <- length(DivColorBreaks) - 1 
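For the classification step itself, here is a base-R sketch that maps the values to a red-grey-blue diverging palette via those breaks (independent of plotKML; whether plotKML's colour arguments accept the result in this form is an assumption worth testing):

```r
vals   <- c(-5, -4, -1, 0, 0, 3, -2, -6, -6, -6)  # the raster values above
breaks <- c(-6, -3, -0.1, 0.1, 3, 6)
# 5 classes: reds below zero, grey around zero, blues above
pal    <- colorRampPalette(c("red", "grey", "blue"))(length(breaks) - 1)
idx    <- findInterval(vals, breaks, all.inside = TRUE)  # class index per cell
cols   <- pal[idx]
stopifnot(cols[4] == pal[3])  # the zero value lands in the middle (grey) class
```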
(Continue reading)

Bernd Vogelgesang | 13 Sep 19:04 2014

Intended usage of gIntersection ?

Dear list,

I'm trying to intersect two SpatialPolygonsDataFrames imported from shape  
files (via readOGR).
The idea is to get the new polygons plus the merged attributes of both  
layers, but the outcome is a SpatialPolygons object without a data slot.

Is this the intended behaviour of gIntersection, or is there something  
broken in my R installation? I have now googled for two days to find a  
solution to this problem, but only found two hits on Stack Exchange where  
people had similar problems, with no working solution (at least for me),  
nor was I able to find any standard recipe for joining the attributes back  
to the new polygons.

So apparently only few people ever need to get the data from an  
intersection as well, or am I missing some really basic R capability not  
worth writing down in any documentation?
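For the record, one recipe that circulates for this: with byid = TRUE, gIntersection encodes the two parents' polygon IDs in each output ID ("idA idB"), so attributes can be re-joined by splitting those IDs. A hedged sketch (the exact ID format depends on the rgeos version, and `layer1`/`layer2` are placeholders; the live lines demonstrate only the ID-splitting step):

```r
## inter <- gIntersection(layer1, layer2, byid = TRUE)
## ids   <- sapply(slot(inter, "polygons"), slot, "ID")

# Splitting the composite IDs recovers the row names of both inputs:
ids   <- c("0 3", "1 3", "2 4")          # example composite IDs
parts <- do.call(rbind, strsplit(ids, " "))

## Those row names can then index the original data slots:
## merged <- cbind(layer1@data[parts[, 1], ], layer2@data[parts[, 2], ])
stopifnot(identical(parts[, 1], c("0", "1", "2")))
```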

Hope someone can shed some light


Bernd Vogelgesang
Siedlerstraße 2
91083 Baiersdorf/Igelsdorf
Tel: 09133-825374

R-sig-Geo mailing list
(Continue reading)

Anthony Fischbach | 12 Sep 20:14 2014

plotKML organizing folders

I wish to allow users of my kml to select specific classes of entities by
folders within the virtual globe display.
For example (using the standard plotKML dataset) I wish to allow users to
select 'class A' and 'class B' bigfoot sightings by selecting folders that
have intuitive names.

## Toy example:
data(bigfoot)              ## load the standard data frame
bigfootA <- head(bigfoot)  ## grab the top of the data frame, which has class A sightings
bigfootB <- tail(bigfoot)  ## grab the bottom of the data frame, which has class B sightings
## cast both data frames into SpatialPointsDataFrames with defined
## coordinate reference systems
coordinates(bigfootA) <- c('Lon', 'Lat')  ## cast as SpatialPointsDataFrame
proj4string(bigfootA) <- CRS("+proj=longlat +ellps=WGS84 +datum=WGS84 +no_defs")
coordinates(bigfootB) <- c('Lon', 'Lat')
proj4string(bigfootB) <- CRS("+proj=longlat +ellps=WGS84 +datum=WGS84 +no_defs")

kml_open(file.name = 'BigWithFolders.kml', folder.name = 'Big Foot',
         kml_visibility = TRUE)
## Build the points for each class
kml_layer.SpatialPoints(obj = bigfootA, points_names = bigfootA@data$NAMES,
                        colour = 'green', LabelScale = 0.8,
                        alpha = 0.6, balloon = TRUE)
(Continue reading)

Andreas Forø Tollefsen | 12 Sep 09:35 2014

knearneigh: data non-numeric

Hi all,

I have never experienced this issue before, but I assume there is something
in my SpatialPolygonsDataFrame that is causing it.
Whenever I try to create a knn object from these data, I get the error
"knearneigh: data non-numeric".

The code used:
knearneigh(gadmsimpl4_poly, k = 4, longlat = FALSE)
Error in knearneigh(gadmsimpl4_poly, k = 4, longlat = FALSE) :
knearneigh: data non-numeric

Any ideas what might be causing this error? I cannot remember seeing this before.


> class(gadmsimpl3_poly)
[1] "SpatialPolygonsDataFrame"
attr(,"package")
[1] "sp"
> length(colnames(gadmsimpl3_poly@data))
[1] 84
> length(gadmsimpl3_poly)
[1] 1095
> gadmsimpl3_poly@bbox
        min      max
x -25.36181 50.48654
y -34.63487 25.00001
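A hedged guess at the cause: knearneigh (spdep) expects a numeric coordinate matrix or a SpatialPoints object, not a SpatialPolygonsDataFrame, so passing polygon centroids explicitly may be the fix (a sketch; the centroid step assumes sp's coordinates() method for polygons, and the live lines only illustrate the input shape knearneigh wants):

```r
## coords <- coordinates(gadmsimpl4_poly)  # polygon centroids, an n x 2 matrix
## knn4   <- knearneigh(coords, k = 4, longlat = FALSE)

# The shape knearneigh expects, illustrated in base R:
set.seed(42)
coords <- cbind(x = runif(10, -25, 50), y = runif(10, -35, 25))
stopifnot(is.matrix(coords), is.numeric(coords), ncol(coords) == 2)
```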
(Continue reading)

Andrew Vitale | 12 Sep 01:01 2014

Create raster layer that counts SpatialPoint occurrence within cells

Hello all,

I have a Python script that searches Twitter for tweets that meet certain
criteria and records the coordinates of those tweets.  This leaves me with
a CSV that can then easily be read into R as a SpatialPoints object.

Once I have the SpatialPoints, I would like to make a raster layer that has
the count of tweet locations that occur within each cell of that raster.

I have a working script, but I feel like my way of doing this is very
inefficient.  Does anyone know a better way to produce a map similar to
this?  I'm also interested in other suggestions for how to visualize these data.

The example below shows my approach to this problem.  I believe the weak
point of the code is my for loop.  I would like to increase efficiency, as
I plan to use this approach on much larger datasets.  Thanks for your help.

-Andrew Vitale

The example:


## Create a set of random long/lat coordinates
## that are centred around a "metro" area.
## These data mimic the CSV of tweet coords I read into R
n = 500
(Continue reading)
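For the counting step, a loop-free route is to bin each point into a cell index and tabulate. raster::rasterize(pts, r, fun = 'count'), or tabulating raster::cellFromXY(r, pts), is the package-level way (assumed from the raster documentation); the live code below is a base-R sketch of the same idea with hypothetical cell edges:

```r
set.seed(1)
n   <- 500
lon <- rnorm(n, -119.8, 0.05)  # mimic metro-centred tweet coordinates
lat <- rnorm(n,   39.5, 0.05)
xb  <- seq(-120.0, -119.6, length.out = 9)  # cell edges for an 8 x 8 grid
yb  <- seq(  39.3,   39.7, length.out = 9)
# bin each point into a column/row index, then tabulate: one pass, no loop
cnt <- table(factor(findInterval(lon, xb), levels = 1:8),
             factor(findInterval(lat, yb), levels = 1:8))
stopifnot(sum(cnt) <= n)  # points outside the grid edges are dropped
```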