Then we add the word 'bird', and the contents of the bloom filter do not change, because 'bird' happens to hash to the same value as 'fish'.
Finally we check whether a batch of words ('dog', 'fish', 'cat', 'bird', 'duck', 'emu') have been indexed. The result shows that 'duck' returns True while 'emu' returns False. 'duck' was never added, but its hash happens to be the same as that of 'dog', so it is a false positive.
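The behaviour described above can be sketched as a minimal bloom filter. This is an illustrative sketch, not necessarily the article's exact implementation: the class name `Bloomfilter`, the bit-array size of 30, and the use of Python's built-in `hash()` are all my assumptions.

```python
class Bloomfilter(object):
    """A minimal bloom filter with a single hash function (illustrative sketch).

    With only one hash and a small bit array, unrelated words can collide
    (e.g. 'bird' landing on the same slot as 'fish'), which is exactly the
    false-positive behaviour described above.
    """

    def __init__(self, size):
        self.values = [False] * size  # the bit array
        self.size = size

    def hash_value(self, value):
        # Fold Python's built-in hash into the bit-array range.
        return hash(value) % self.size

    def add_value(self, value):
        self.values[self.hash_value(value)] = True

    def might_contain(self, value):
        # True means "maybe present"; False means "definitely absent".
        return self.values[self.hash_value(value)]


bf = Bloomfilter(30)
for word in ('dog', 'fish', 'cat', 'bird'):
    bf.add_value(word)

# Every added word must report True; a word that was never added may
# still report True if its hash collides with an added word's slot.
for word in ('dog', 'fish', 'cat', 'bird', 'duck', 'emu'):
    print(word, bf.might_contain(word))
```

Note that Python randomizes string hashing per process (`PYTHONHASHSEED`), so which of the un-added words collides can vary from run to run; the guaranteed properties are only "added implies True" and "False implies absent".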
Segmentation
The next step is to implement segmentation. The goal of segmentation is to split our text data into the smallest searchable units, i.e., words. Here we focus on English, since Chinese word segmentation requires natural language processing and is considerably more complex, while English can essentially be split on punctuation and whitespace.
Let's look at the segmentation code:
def major_segments(s):
    """
    Perform major segmenting on a string. Split the string by all of the major
    breaks, and return the set of everything found. The breaks in this implementation
    are single characters, but in Splunk proper they can be multiple characters.
    A set is used because ordering doesn't matter, and duplicates are bad.
    """
    major_breaks = ' '
    last = -1
    results = set()

    # enumerate() will give us (0, s[0]), (1, s[1]), ...
    for idx, ch in enumerate(s):
        if ch in major_breaks:
            segment = s[last+1:idx]
            results.add(segment)
            last = idx

    # The last character may not be a break so always capture
    # the last segment (which may end up being "", but yolo)
    segment = s[last+1:]
    results.add(segment)

    return results
Major segmentation
Major segmentation splits on spaces. A real tokenizer has additional separators; for example, Splunk's default segmenters include the following, and users can also define their own:
] < > ( ) { } | ! ; , ' " * s & ? + %21 %26 %2526 %3B %7C %20 %2B %3D -- %2520 %5D %5B %3A %0A %2C %28 %29
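A quick sanity check of the space-only segmenter. The definition of `major_segments` is repeated here so the snippet runs on its own, and the sample string is my own choice, not from the article:

```python
def major_segments(s):
    """Split s on spaces and return the set of pieces (copied from above)."""
    major_breaks = ' '
    last = -1
    results = set()
    for idx, ch in enumerate(s):
        if ch in major_breaks:
            results.add(s[last+1:idx])
            last = idx
    results.add(s[last+1:])  # capture the trailing segment
    return results

print(major_segments('src/bar_baz 1.2.3.4 hello'))
# → {'src/bar_baz', '1.2.3.4', 'hello'} (a set, so order varies)
```

Note that underscores and dots survive major segmentation untouched; breaking those apart is the job of minor segmentation below.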
def minor_segments(s):
    """
    Perform minor segmenting on a string. This is like major
    segmenting, except it also captures from the start of the
    input to each break.
    """
    minor_breaks = '_.'
    last = -1
    results = set()

    for idx, ch in enumerate(s):
        if ch in minor_breaks:
            segment = s[last+1:idx]
            results.add(segment)

            segment = s[:idx]
            results.add(segment)

            last = idx

    segment = s[last+1:]
    results.add(segment)
    results.add(s)

    return results
Minor segmentation
Minor segmentation follows the same logic as major segmentation, except it also adds the prefix from the start of the string up to each break. For example, the minor segmentation of "1.2.3.4" yields 1, 2, 3, 4, 1.2, 1.2.3, plus the whole string 1.2.3.4.
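The "1.2.3.4" example above can be verified directly. The definition of `minor_segments` is repeated (in a slightly condensed form) so the snippet is self-contained:

```python
def minor_segments(s):
    """Split s on '_' and '.', also keeping each prefix (copied from above)."""
    minor_breaks = '_.'
    last = -1
    results = set()
    for idx, ch in enumerate(s):
        if ch in minor_breaks:
            results.add(s[last+1:idx])  # the piece between breaks
            results.add(s[:idx])        # the prefix up to this break
            last = idx
    results.add(s[last+1:])  # the trailing piece
    results.add(s)           # the whole string
    return results

print(sorted(minor_segments('1.2.3.4')))
# → ['1', '1.2', '1.2.3', '1.2.3.4', '2', '3', '4']
```

Keeping the prefixes is what lets a search for "1.2" or "1.2.3" match an event containing the IP-like token "1.2.3.4".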
def segments(event):
    """Simple wrapper around major_segments / minor_segments"""
    results = set()
    for major in major_segments(event):
        for minor in minor_segments(major):
            results.add(minor)
    return results
The segmentation logic, then, is to first perform major segmentation on the text, then minor segmentation on each major segment, and return the union of all resulting terms.
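Putting the three functions together on one event string. All three definitions are repeated (condensed) so the example runs standalone; the sample event `'error_log 1.2.3.4'` is my own, not from the article:

```python
def major_segments(s):
    """Split s on spaces (copied from above)."""
    major_breaks = ' '
    last = -1
    results = set()
    for idx, ch in enumerate(s):
        if ch in major_breaks:
            results.add(s[last+1:idx])
            last = idx
    results.add(s[last+1:])
    return results

def minor_segments(s):
    """Split s on '_' and '.', also keeping each prefix (copied from above)."""
    minor_breaks = '_.'
    last = -1
    results = set()
    for idx, ch in enumerate(s):
        if ch in minor_breaks:
            results.add(s[last+1:idx])
            results.add(s[:idx])
            last = idx
    results.add(s[last+1:])
    results.add(s)
    return results

def segments(event):
    """Major-segment the event, then minor-segment each major piece."""
    results = set()
    for major in major_segments(event):
        for minor in minor_segments(major):
            results.add(minor)
    return results

print(sorted(segments('error_log 1.2.3.4')))
# → ['1', '1.2', '1.2.3', '1.2.3.4', '2', '3', '4', 'error', 'error_log', 'log']
```

Every term in that set becomes an index key, so the event can later be found by searching for any of them, from the bare "error" up to the full "1.2.3.4".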