Implementing Recommender-System Similarity Measures in Spark/Scala (Euclidean Distance, Pearson Correlation Coefficient, Cosine Similarity, with Code)


In recommender systems, collaborative filtering is one of the most widely used techniques. It is usually divided into user-based and item-based variants, and the core idea is the same: starting from one user or one item, use its attributes (for a user, things like gender, age, occupation, income, and preferences) to find the users or items most similar to it. In real systems the factors involved are, of course, far more complex.

This article does not cover the underlying mathematics; it simply provides code for the commonly used similarity measures, with several alternative implementations of each.

 

Euclidean distance

import org.apache.spark.mllib.linalg.{Vector, Vectors}

/** Euclidean distance between two Spark vectors, computed via their backing arrays. */
def euclidean2(v1: Vector, v2: Vector): Double = {
  require(v1.size == v2.size, s"SimilarityAlgorithms: Vector dimensions do not match: Dim(v1)=${v1.size} and Dim(v2)=${v2.size}.")

  val x = v1.toArray
  val y = v2.toArray

  euclidean(x, y)
}

/** Euclidean distance between two arrays: sqrt(sum((x_i - y_i)^2)). */
def euclidean(x: Array[Double], y: Array[Double]): Double = {
  require(x.length == y.length, s"SimilarityAlgorithms: Array lengths do not match: Len(x)=${x.length} and Len(y)=${y.length}.")

  math.sqrt(x.zip(y).map(p => p._1 - p._2).map(d => d * d).sum)
}

/** Euclidean distance via Spark's built-in squared distance. */
def euclidean(v1: Vector, v2: Vector): Double = {
  val sqdist = Vectors.sqdist(v1, v2)
  math.sqrt(sqdist)
}
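
A quick sanity check of the three variants, as a minimal sketch. It assumes the methods above live in an object named SimilarityAlgorithms (the name used in the require messages):

import org.apache.spark.mllib.linalg.Vectors

val a = Vectors.dense(1.0, 2.0, 3.0)
val b = Vectors.dense(4.0, 6.0, 8.0)

// All three variants agree: sqrt(3*3 + 4*4 + 5*5) ≈ 7.071
println(SimilarityAlgorithms.euclidean2(a, b))
println(SimilarityAlgorithms.euclidean(a.toArray, b.toArray))
println(SimilarityAlgorithms.euclidean(a, b))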

 

Pearson correlation coefficient

/** Pearson correlation coefficient between two arrays. */
def pearsonCorrelationSimilarity(arr1: Array[Double], arr2: Array[Double]): Double = {
  require(arr1.length == arr2.length, s"SimilarityAlgorithms: Array lengths do not match: Len(x)=${arr1.length} and Len(y)=${arr2.length}.")

  val sum_vec1 = arr1.sum
  val sum_vec2 = arr2.sum

  val square_sum_vec1 = arr1.map(x => x * x).sum
  val square_sum_vec2 = arr2.map(x => x * x).sum

  val zipVec = arr1.zip(arr2)

  // Numerator: sum(x*y) - sum(x)*sum(y)/n
  val product = zipVec.map(x => x._1 * x._2).sum
  val numerator = product - (sum_vec1 * sum_vec2 / arr1.length)

  // Denominator: sqrt((sum(x^2) - sum(x)^2/n) * (sum(y^2) - sum(y)^2/n))
  val denominator = math.sqrt((square_sum_vec1 - math.pow(sum_vec1, 2) / arr1.length) * (square_sum_vec2 - math.pow(sum_vec2, 2) / arr2.length))
  if (denominator == 0) Double.NaN else numerator / denominator
}
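
A quick check with made-up values; the two arrays are perfectly linearly related, so the result is 1.0:

val r = pearsonCorrelationSimilarity(Array(1.0, 2.0, 3.0, 4.0), Array(2.0, 4.0, 6.0, 8.0))
println(r) // 1.0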

 

Cosine similarity

import org.jblas.DoubleMatrix

/** Cosine similarity using jblas. */
def cosineSimilarity(v1: DoubleMatrix, v2: DoubleMatrix): Double = {
  require(v1.length == v2.length, s"SimilarityAlgorithms: DoubleMatrix lengths do not match: Len(v1)=${v1.length} and Len(v2)=${v2.length}.")

  v1.dot(v2) / (v1.norm2() * v2.norm2())
}

/** Cosine similarity between two Spark vectors, computed via their backing arrays. */
def cosineSimilarity(v1: Vector, v2: Vector): Double = {
  require(v1.size == v2.size, s"SimilarityAlgorithms: Vector dimensions do not match: Dim(v1)=${v1.size} and Dim(v2)=${v2.size}.")

  val x = v1.toArray
  val y = v2.toArray

  cosineSimilarity(x, y)
}

/** Cosine similarity between two arrays: dot(x, y) / (||x|| * ||y||). */
def cosineSimilarity(x: Array[Double], y: Array[Double]): Double = {
  require(x.length == y.length, s"SimilarityAlgorithms: Array lengths do not match: Len(x)=${x.length} and Len(y)=${y.length}.")

  val member = x.zip(y).map(d => d._1 * d._2).sum

  val temp1 = math.sqrt(x.map(math.pow(_, 2)).sum)
  val temp2 = math.sqrt(y.map(math.pow(_, 2)).sum)

  val denominator = temp1 * temp2
  if (denominator == 0) Double.NaN else member / denominator
}
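
As a quick check, parallel vectors have cosine similarity 1.0 regardless of magnitude. A minimal sketch, assuming the overloads above are in scope and jblas is on the classpath:

val x = Array(1.0, 2.0, 3.0)
val y = Array(2.0, 4.0, 6.0)

println(cosineSimilarity(x, y))                                     // 1.0
println(cosineSimilarity(new DoubleMatrix(x), new DoubleMatrix(y))) // 1.0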

 

Adjusted cosine similarity

/** Adjusted cosine similarity using jblas: both vectors are centered on their common mean before taking the cosine. */
def adjustedCosineSimJblas(x: DoubleMatrix, y: DoubleMatrix): Double = {
  require(x.length == y.length, s"SimilarityAlgorithms: DoubleMatrix lengths do not match: Len(x)=${x.length} and Len(y)=${y.length}.")

  // Mean over both vectors, subtracted element-wise before the cosine is computed.
  val avg = (x.sum() + y.sum()) / (x.length + y.length)
  val v1 = x.sub(avg)
  val v2 = y.sub(avg)
  v1.dot(v2) / (v1.norm2() * v2.norm2())
}

/** Array variant that delegates to the jblas implementation. */
def adjustedCosineSimJblas(x: Array[Double], y: Array[Double]): Double = {
  require(x.length == y.length, s"SimilarityAlgorithms: Array lengths do not match: Len(x)=${x.length} and Len(y)=${y.length}.")

  val v1 = new DoubleMatrix(x)
  val v2 = new DoubleMatrix(y)

  adjustedCosineSimJblas(v1, v2)
}

/** Adjusted cosine similarity between two Spark vectors, computed via their backing arrays. */
def adjustedCosineSimilarity(v1: Vector, v2: Vector): Double = {
  require(v1.size == v2.size, s"SimilarityAlgorithms: Vector dimensions do not match: Dim(v1)=${v1.size} and Dim(v2)=${v2.size}.")

  val x = v1.toArray
  val y = v2.toArray

  adjustedCosineSimilarity(x, y)
}

/** Adjusted cosine similarity between two arrays (pure Scala, no jblas). */
def adjustedCosineSimilarity(x: Array[Double], y: Array[Double]): Double = {
  require(x.length == y.length, s"SimilarityAlgorithms: Array lengths do not match: Len(x)=${x.length} and Len(y)=${y.length}.")

  val avg = (x.sum + y.sum) / (x.length + y.length)

  val member = x.map(_ - avg).zip(y.map(_ - avg)).map(d => d._1 * d._2).sum

  val temp1 = math.sqrt(x.map(num => math.pow(num - avg, 2)).sum)
  val temp2 = math.sqrt(y.map(num => math.pow(num - avg, 2)).sum)

  val denominator = temp1 * temp2
  if (denominator == 0) Double.NaN else member / denominator
}
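
A small example of what the mean-centering changes (made-up rating values): after subtracting the common mean 3.0 the two arrays point in opposite directions, so the adjusted similarity is negative even though the plain cosine is strongly positive.

val r1 = Array(4.0, 5.0, 4.0)
val r2 = Array(2.0, 1.0, 2.0)

println(cosineSimilarity(r1, r2))         // ≈ 0.93
println(adjustedCosineSimilarity(r1, r2)) // -1.0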

 

If you have similar needs in your own work, you can adapt or optimize the code above for your scenario. Keep in mind that many algorithm libraries simply wrap these same similarity measures, so understanding them also helps you understand the libraries. Spark MLlib's KMeans implementation, for example, computes Euclidean distance under the hood.
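
For instance, the KMeans implementation in org.apache.spark.mllib.clustering assigns points to the nearest center by (squared) Euclidean distance, the same measure implemented above. A minimal illustrative sketch, assuming a running SparkContext named sc and made-up toy data:

import org.apache.spark.mllib.clustering.KMeans
import org.apache.spark.mllib.linalg.Vectors

// Two well-separated toy clusters; KMeans groups them by Euclidean distance.
val points = sc.parallelize(Seq(
  Vectors.dense(0.0, 0.0), Vectors.dense(0.1, 0.1),
  Vectors.dense(9.0, 9.0), Vectors.dense(9.1, 9.1)))

val model = KMeans.train(points, 2, 10)  // k = 2, maxIterations = 10
model.clusterCenters.foreach(println)    // roughly (0.05, 0.05) and (9.05, 9.05)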

 

Recommended reading
Important | How Spark decides partition parallelism
Two approaches to integrating Spark Streaming with Kafka

