Hello ... Using Win2K (and reportedly WinXP), when the length of the 'url' string
= 280 characters, a segmentation fault occurs.
This doesn't seem to be affecting unix machines. Thanks -J
6 messages · Jeff Gentry, Brian Ripley
> Using Win2K (and reportedly WinXP), when the length of the 'url' string
> = 280 characters, a segmentation fault occurs.

With what settings of argument `browser'? If this is using file associations there is a Windows internal limit (it is designed for files, and there is a 264 char path limit on files), so I don't think you can entirely avoid this, although we can avoid the segfault. If you absolutely must have such long URLs, try specifying a browser (although you will probably find limits that way too). As the FAQ asks, we do need a reproducible example to check the fixes.

> This doesn't seem to be affecting unix machines.

It's completely independent code between Unix and Windows.
Brian D. Ripley, ripley@stats.ox.ac.uk
Professor of Applied Statistics, http://www.stats.ox.ac.uk/~ripley/
University of Oxford, Tel: +44 1865 272861 (self)
1 South Parks Road, +44 1865 272866 (PA)
Oxford OX1 3TG, UK, Fax: +44 1865 272595
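The file-association limit Ripley describes can be illustrated with a small sketch: a URL handed to the Windows shell via file associations passes through path-handling code with a MAX_PATH-style cap (classically 260 characters, close to the 264 figure above). The guard function and the limit constant below are illustrative assumptions, not R's actual browseURL implementation:

```python
# Hypothetical guard illustrating the limit described above: strings handed
# to the Windows shell via file associations run through path-handling code
# with a MAX_PATH-style cap (classically 260 characters). The constant and
# the check are assumptions for illustration, not R's actual browseURL code.
ASSUMED_SHELL_LIMIT = 260

def safe_to_shell_open(url: str) -> bool:
    """Return True if `url` is short enough to hand to the shell directly."""
    return len(url) <= ASSUMED_SHELL_LIMIT

short_url = "http://www.r-project.org/"
long_url = "http://www.r-project.org/" + "a" * 300

print(safe_to_shell_open(short_url))  # True
print(safe_to_shell_open(long_url))   # False: invoke a browser explicitly instead
```

A real fix would avoid the crash rather than just refuse the URL, but the length check captures why an explicit `browser` argument is the suggested workaround.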
> With what settings of argument `browser'?

In this case, none. getOption("browser") returns NULL. When browseURL()
is working for me under that configuration it pulls up IE5.
> If you absolutely must have such long URLs, try specifying a browser (although you will probably find limits that way too).

When I tried specifying 'browser="explorer"', I still get errors, although not segfaults. The error seems to be dependent on exactly what the URL string is, and is always fairly odd (as if Windows is pushing the extra bits off into other commands).
> As the FAQ asks, we do need a reproducible example to check the fixes.
Well, the toy example I was using to first verify that it was coming from
browseURL in general was just to do this:
z <- rep("z", 300)
z <- paste(z, collapse="")
browseURL(z)
While that URL obviously won't work, note that if you make it shorter
then instead of a segfault you should just get the error that the
URL doesn't seem to exist.
The other example I was using (a 'real' example) requires the use of the
annotate and hgu95av2 packages from Bioconductor (and was supplied by
James MacDonald):
library(annotate)
data(eset)
gn <- geneNames(eset)[453]
gn <- getPMID(gn,"hgu95av2")
gn <- unlist(gn, use.names=FALSE)
pubmed(gn, disp="browser")
This builds up a URL query and then calls 'browseURL(query)'.
-J
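For reference, the pubmed() call above assembles its query by joining PubMed IDs into a single GET URL. A rough Python sketch of that construction (the base URL and parameter names mirror the query string shown later in the thread; the helper itself is hypothetical, not the annotate package's actual code):

```python
# Hypothetical reconstruction of how a pubmed()-style query URL is assembled:
# PubMed IDs are joined with an encoded comma ("%2c") into one long GET URL.
# Base URL and parameter names are taken from the query shown later in the
# thread; this helper is an illustration, not the annotate package's code.
def build_pubmed_url(ids):
    base = "http://www.ncbi.nih.gov/entrez/query.fcgi"
    params = "tool=bioconductor&cmd=Retrieve&db=PubMed"
    return f"{base}?{params}&list_uids=" + "%2c".join(str(i) for i in ids)

url = build_pubmed_url([12730033, 12691826, 12544996])
print(url)
print(len(url))  # grows with each ID; with the ~20 IDs in this thread it passes 280 characters
```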
> Well, the toy example I was using to first verify that it was coming from
> browseURL in general was just to do this:
>
> z <- rep("z", 300)
> z <- paste(z, collapse="")
> browseURL(z)
>
> While that URL obviously won't work, note that if you make it shorter
> then instead of a segfault you should just get the error that the
> URL doesn't seem to exist.
That's not a URL at all, and I get nothing (as I should). If I put http:// in front it works (as a search item).
> The other example I was using (a 'real' example) requires the use of the
> annotate and hgu95av2 packages from Bioconductor (and was supplied by
> James MacDonald):
>
> library(annotate)
> data(eset)
> gn <- geneNames(eset)[453]
> gn <- getPMID(gn,"hgu95av2")
> gn <- unlist(gn, use.names=FALSE)
> pubmed(gn, disp="browser")
>
> This builds up a URL query and then calls 'browseURL(query)'.
And you could extract `query' and tell us what that is ....
>> Well, the toy example I was using to first verify that it was coming from
>> browseURL in general was just to do this:
>>
>> z <- rep("z", 300)
>> z <- paste(z, collapse="")
>> browseURL(z)
>
> That's not a URL at all, and I get nothing (as I should). If I put http:// in front it works (as a search item).
That's not the point. When it is repped to length 300, it causes a
segfault for me. When it is repped to a length of, say, 250, it simply
doesn't work properly (as one would expect, because as you so correctly
pointed out 'aaaaaaa....' isn't a URL). My point here was to demonstrate
the segfaulting due to excessively long strings, which, at least for me,
does not seem to be tied to whether the URL is valid or not.
Here:

z <- paste("http://www.r-project.org/", paste(rep("a", 200), collapse=""))
browseURL(z)

This gives an error that the URL does not exist.

z <- paste("http://www.r-project.org/", paste(rep("a", 300), collapse=""))
browseURL(z)

This causes a segfault.
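The two cases straddle the classic 260-character MAX_PATH limit, which is consistent with the failure mode. Note that R's paste() with its default sep=" " also inserts a space into the URL. A quick length check, assuming that limit (the 260 threshold is an assumption; R's actual internal buffer size is not stated in the thread):

```python
# Length check for the two test URLs above. R's paste() default sep=" "
# inserts a space, so each URL is base + " " + padding. The 260-character
# threshold is an assumed MAX_PATH-style limit, not R's documented buffer.
base = "http://www.r-project.org/"

url_200 = base + " " + "a" * 200
url_300 = base + " " + "a" * 300

print(len(url_200))  # 226 -- under an assumed 260-char limit: plain error
print(len(url_300))  # 326 -- over it: the segfaulting case
```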
>> This builds up a URL query and then calls 'browseURL(query)'.
>
> And you could extract `query' and tell us what that is ....
[1] "http://www.ncbi.nih.gov/entrez/query.fcgi?tool=bioconductor&cmd=Retrieve&db=PubMed&list_uids=12730033%2c12691826%2c12544996%2c12490434%2c12477932%2c12411538%2c12391142%2c12207910%2c11971973%2c11864979%2c10859165%2c10216320%2c10205060%2c3931075%2c3470951%2c3019832%2c2880793%2c2858050%2c2538825%2c1700760%2c1478667"
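That query string can be checked directly: counting its characters shows it is well past the 280-character threshold in the original report, so the real example hits the same failure as the synthetic 300-character string.

```python
# The actual query URL from the thread (21 PubMed IDs joined with "%2c").
# Its length explains why this real example segfaulted like the toy one.
query = ("http://www.ncbi.nih.gov/entrez/query.fcgi?tool=bioconductor"
         "&cmd=Retrieve&db=PubMed&list_uids=12730033%2c12691826%2c12544996"
         "%2c12490434%2c12477932%2c12411538%2c12391142%2c12207910%2c11971973"
         "%2c11864979%2c10859165%2c10216320%2c10205060%2c3931075%2c3470951"
         "%2c3019832%2c2880793%2c2858050%2c2538825%2c1700760%2c1478667")
print(len(query))  # well past the ~280-character threshold reported above
```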
>> That's not a URL at all, and I get nothing (as I should). If I put http:// in front it works (as a search item).
>
> That's not the point. When it is repped to length 300, it causes a
> segfault for me.

Yes, it _is_ the point: I do not get a segfault on that example.

> When it is repped to a length of, say, 250, it simply
> doesn't work properly (as one would expect, because as you so correctly
> pointed out 'aaaaaaa....' isn't a URL). My point here was to demonstrate
> the segfaulting due to excessively long strings, which, at least for me,
> does not seem to be tied to whether the URL is valid or not.
>>> This builds up a URL query and then calls 'browseURL(query)'.
>>
>> And you could extract `query' and tell us what that is ....
>
> [1] "http://www.ncbi.nih.gov/entrez/query.fcgi?tool=bioconductor&cmd=Retrieve&db=PubMed&list_uids=12730033%2c12691826%2c12544996%2c12490434%2c12477932%2c12411538%2c12391142%2c12207910%2c11971973%2c11864979%2c10859165%2c10216320%2c10205060%2c3931075%2c3470951%2c3019832%2c2880793%2c2858050%2c2538825%2c1700760%2c1478667"
At last, thank you. Yes, that segfaults in 1.8.1 but works in the current 1.9.0 cvs sources, where what I guessed to be the limit has been removed.