I am trying to scrape a large number of web pages so that I can analyze them later. Because of the huge number of URLs, I decided to use the parallel package together with XML.
Specifically, I am using the htmlParse() function from XML. It works fine when used with sapply, but when used with parSapply it produces empty objects of class HTMLInternalDocument.
library(XML)
library(parallel)

url1 <- "http://forums.philosophyforums.com/threads/senses-of-truth-63636.html"
url2 <- "http://forums.philosophyforums.com/threads/the-limits-of-my-language-impossibly-mean-the-limits-of-my-world-62183.html"
url3 <- "http://forums.philosophyforums.com/threads/how-language-models-reality-63487.html"

myFunction <- function(x){
  cl <- makeCluster(getOption("cl.cores", detectCores()))
  ok <- parSapply(cl = cl, X = x, FUN = htmlParse)
  stopCluster(cl)
  return(ok)
}

urls <- c(url1, url2, url3)

# Works
output1 <- sapply(urls, function(x) htmlParse(x))
str(output1[[1]])
> Classes 'HTMLInternalDocument', 'HTMLInternalDocument', 'XMLInternalDocument', 'XMLAbstractDocument', 'oldClass' <externalptr>
output1[[1]]

# Doesn't work
output2 <- myFunction(urls)
str(output2[[1]])
> Classes 'HTMLInternalDocument', 'oldClass' <externalptr>
output2[[1]]  # empty
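My guess is that the problem comes from htmlParse() returning an external pointer to a C-level document: the pointer cannot be serialized back from the worker processes, so what reaches the master is an empty shell. If that is right, having each worker return something serializable instead (for example the page re-exported as text with saveXML()) should avoid it. A rough, untested sketch along those lines (parseInParallel is just a name I made up):

library(XML)
library(parallel)

# Each worker returns the page as a character string (serializable),
# instead of an HTMLInternalDocument (an external pointer, which is not).
parseInParallel <- function(x){
  cl <- makeCluster(getOption("cl.cores", detectCores()))
  on.exit(stopCluster(cl))
  clusterEvalQ(cl, library(XML))          # load XML on every worker
  txt <- parSapply(cl = cl, X = x, FUN = function(u) saveXML(htmlParse(u)))
  lapply(txt, htmlParse, asText = TRUE)   # re-parse locally to get usable documents
}

output3 <- parseInParallel(urls)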
Thanks.
You can use getURIAsynchronous from the RCurl package, which lets the caller specify multiple URIs to download at the same time.
library(RCurl)
library(XML)

get.asynch <- function(urls){
  txt <- getURIAsynchronous(urls)
  ## this part can be easily parallelized
  ## I am just using lapply here as a first attempt
  res <- lapply(txt, function(x){
    doc <- htmlParse(x, asText = TRUE)
    xpathSApply(doc, "/html/body/h2[2]", xmlValue)
  })
}

get.synch <- function(urls){
  lapply(urls, function(x){
    doc <- htmlParse(x)
    res2 <- xpathSApply(doc, "/html/body/h2[2]", xmlValue)
    res2
  })
}
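As the comment in get.asynch says, the parsing step can itself be parallelized. A rough sketch, assuming a Unix-like machine so mclapply can fork (on Windows you would set up a cluster and use parLapply instead):

library(parallel)

get.asynch.par <- function(urls){
  txt <- getURIAsynchronous(urls)                     # concurrent downloads, as before
  mclapply(txt, function(x){
    doc <- htmlParse(x, asText = TRUE)
    xpathSApply(doc, "/html/body/h2[2]", xmlValue)    # plain character output
  }, mc.cores = detectCores())
}

Note that the xpathSApply result is an ordinary character vector, so unlike the parsed documents it survives the trip back from the child processes.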
Here is a benchmark with 100 URLs; the parsing time is roughly cut in half.
library(microbenchmark)
uris <- c("http://www.omegahat.org/RCurl/index.html")
urls <- replicate(100, uris)
microbenchmark(get.asynch(urls), get.synch(urls), times = 1)

Unit: seconds
             expr      min       lq   median       uq      max neval
 get.asynch(urls) 22.53783 22.53783 22.53783 22.53783 22.53783     1
  get.synch(urls) 39.50615 39.50615 39.50615 39.50615 39.50615     1