1. Install the latest Elasticsearch
Reference: https://www.elastic.co/guide/en/elasticsearch/reference/5.2/deb.html
Run the following commands in order:
wget -qO - https://artifacts.elastic.co/GPG-KEY-elasticsearch | sudo apt-key add -
sudo apt-get install apt-transport-https
echo "deb https://artifacts.elastic.co/packages/5.x/apt stable main" | sudo tee -a /etc/apt/sources.list.d/elastic-5.x.list
sudo apt-get update && sudo apt-get install elasticsearch
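Optionally, you can check that the elasticsearch package is now visible from the Elastic repository (and which version would be installed), for example:
apt-cache policy elasticsearch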
Check whether your system uses SysV init or systemd:
ps -p 1
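If PID 1 is systemd, the output will look roughly like this:
  PID TTY          TIME CMD
    1 ?        00:00:03 systemd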
Mine uses systemd, so run:
sudo /bin/systemctl daemon-reload
sudo /bin/systemctl enable elasticsearch.service
2. Start Elasticsearch
sudo service elasticsearch start
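If the service does not come up, you can inspect its status via systemd; the deb package also writes its logs to /var/log/elasticsearch/ by default:
sudo systemctl status elasticsearch.service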
3. Check that the installation succeeded
curl -XGET 'localhost:9200/?pretty'
If the response looks something like the following, the installation succeeded:
{ "name" : "Cp8oag6","cluster_name" : "elasticsearch","cluster_uuid" : "AT69_T_DTp-1qgIJlatQqA","version" : { "number" : "5.2.2","build_hash" : "f27399d","build_date" : "2016-03-30T09:51:41.449Z","build_snapshot" : false,"lucene_version" : "6.4.1" },"tagline" : "You Know,for Search" }
4. Configure synonyms
The default configuration directory of Elasticsearch 5 is /etc/elasticsearch, so create the synonyms file there (create the analysis subdirectory first if it does not exist):
sudo gedit /etc/elasticsearch/analysis/synonyms.txt
Enter the synonyms, one group per line. Note that the file must be UTF-8 encoded and must use ASCII (English) commas:
中文,汉语,汉字
2室1厅1卫,2室2厅1卫=>二居室
1室1厅1卫=>一居室,一室
3室2厅1卫,三居室
Groups that contain => only produce the term(s) to the right of => at analysis time, which should save some memory; I am not sure what the drawbacks are.
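A quick way to see the effect of the => rules, assuming the gj index with the by_smart analyzer from step 8 below has already been created: analyzing 2室2厅1卫 should return only the token 二居室.
curl -XGET 'http://localhost:9200/gj/_analyze?pretty' -d '{"analyzer":"by_smart","text":"2室2厅1卫"}'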
5. Install the word-segmentation plugin
Elasticsearch has no built-in Chinese word segmentation, so we use this plugin:
https://github.com/medcl/elasticsearch-analysis-ik/releases
Download the version that matches your Elasticsearch release and unzip it into an ik folder under the Elasticsearch plugins directory; mine is /usr/share/elasticsearch/plugins/ik.
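The plugin is only picked up after Elasticsearch is restarted (see the restart in step 6 below). Once it has restarted, you can confirm the plugin loaded with the cat plugins API; analysis-ik should appear in the list:
curl -XGET 'http://localhost:9200/_cat/plugins?v'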
6. Configure custom words
Some of the words we need are already in ik's dictionary, for example 中文/汉语/汉字; others, such as 一居室/二居室, are not, so the missing ones have to be added ourselves:
gedit /usr/share/elasticsearch/plugins/ik/config/custom/mydict.dic
2室1厅1卫
2室2厅1卫
二居室
1室1厅1卫
一居室
3室2厅1卫
三居室
After installing or modifying a plugin (or its dictionaries), Elasticsearch must be restarted for the change to take effect. Note that custom/mydict.dic has to be listed in the ext_dict entry of the plugin's IKAnalyzer.cfg.xml; many ik releases reference it there by default, otherwise add it.
sudo service elasticsearch stop
sudo service elasticsearch start
7. Check that the custom segmentation works
curl -XGET 'http://localhost:9200/gj/_analyze?pretty' -d '{"analyzer":"by_smart","text":"一居室"}'
Since by_smart is defined on the gj index created in step 8, the index has to exist before this call works. If 一居室 comes back as a single token instead of being split into individual characters, the custom dictionary is in effect.
8. Create the index
The preparation is done, so the synonyms can now be used from code. Here is a PHP demonstration:
require 'vendor/autoload.php'; // composer autoloader for elasticsearch/elasticsearch

use Elasticsearch\ClientBuilder;

$client = ClientBuilder::create()->build();

// Create the index
$settings = json_decode('{
    "analysis": {
        "analyzer": {
            "by_smart": {
                "type": "custom",
                "tokenizer": "ik_smart",
                "filter": ["by_tfr", "by_sfr"],
                "char_filter": ["by_cfr"]
            },
            "by_max_word": {
                "type": "custom",
                "tokenizer": "ik_max_word",
                "char_filter": ["by_cfr"]
            }
        },
        "filter": {
            "by_tfr": {
                "type": "stop",
                "stopwords": [" "]
            },
            "by_sfr": {
                "type": "synonym",
                "synonyms_path": "analysis/synonyms.txt"
            }
        },
        "char_filter": {
            "by_cfr": {
                "type": "mapping",
                "mappings": ["| => |"]
            }
        }
    }
}');

$mappings = json_decode('{
    "_default_": {
        "properties": {
            "shoujia": {"type": "double"}
        }
    },
    "xinfang": {
        "_source": {"enabled": true},
        "properties": {
            "huxing": {
                "type": "text",
                "index": true,
                "analyzer": "by_max_word",
                "search_analyzer": "by_smart"
            }
        }
    }
}');

$params = [
    'index' => 'gj',
    'body' => [
        'settings' => $settings,
        'mappings' => $mappings
    ]
];
$client->indices()->create($params);
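To check the whole pipeline end to end, here is a minimal sketch using curl with made-up sample data: index a listing whose huxing is 二居室, then search for 2室2厅1卫; the by_smart search analyzer rewrites that query to 二居室, so the document should come back as a hit.
curl -XPUT 'http://localhost:9200/gj/xinfang/1?refresh=true&pretty' -d '{"huxing":"二居室","shoujia":150.5}'
curl -XGET 'http://localhost:9200/gj/xinfang/_search?pretty' -d '{"query":{"match":{"huxing":"2室2厅1卫"}}}'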
9. Glossary
I have only been using Elasticsearch for a few days and don't understand much yet, so these are only partial understandings.
_index is similar to a schema (database) name in a relational database
_type is similar to a table
properties are the fields (columns) of the table
mappings configure each field's analysis rules (for example our ik analysis) and data type (integer, double, string, text, and so on)
analysis defines the analysis (tokenization) rules
Original article: https://www.f2er.com/ubuntu/353977.html