Chunk grammar not reading commas
import nltk
from nltk.chunk.util import tagstr2tree
from nltk import word_tokenize, pos_tag

text = "John Rose Center is very beautiful place and i want to go there with Barbara Palvin. Also there are stores like Adidas ,Nike ,Reebok Center."
tagged_text = pos_tag(text.split())
grammar = "NP:{<NNP>+}"
cp = nltk.RegexpParser(grammar)
result = cp.parse(tagged_text)
print(result)
Output:
(S
  (NP John/NNP Rose/NNP Center/NNP)
  is/VBZ
  very/RB
  beautiful/JJ
  place/NN
  and/CC
  i/NN
  want/VBP
  to/TO
  go/VB
  there/RB
  with/IN
  (NP Barbara/NNP Palvin./NNP)
  Also/RB
  there/EX
  are/VBP
  stores/NNS
  like/IN
  (NP Adidas/NNP ,Nike/NNP ,Reebok/NNP Center./NNP))
The grammar I use for chunking only matches NNP tags, but when words are joined by commas they still end up in the same chunk. I want my chunks to look like this:
(S
  (NP John/NNP Rose/NNP Center/NNP)
  is/VBZ
  very/RB
  beautiful/JJ
  place/NN
  and/CC
  i/NN
  want/VBP
  to/TO
  go/VB
  there/RB
  with/IN
  (NP Barbara/NNP Palvin./NNP)
  Also/RB
  there/EX
  are/VBP
  stores/NNS
  like/IN
  (NP Adidas,/NNP)
  (NP Nike,/NNP)
  (NP Reebok/NNP Center./NNP))
What should I write in `grammar = `, or can I edit the output to look like the above? As you can see, I am parsing only proper nouns for my named-entity project. Please help.
Use word_tokenize(string) instead of string.split():
>>> import nltk
>>> from nltk.chunk.util import tagstr2tree
>>> from nltk import word_tokenize, pos_tag
>>> text = "John Rose Center is very beautiful place and i want to go there with Barbara Palvin. Also there are stores like Adidas ,Nike ,Reebok Center."
>>> tagged_text = pos_tag(word_tokenize(text))
>>>
>>> grammar = "NP:{<NNP>+}"
>>>
>>> cp = nltk.RegexpParser(grammar)
>>> result = cp.parse(tagged_text)
>>>
>>> print(result)
(S
  (NP John/NNP Rose/NNP Center/NNP)
  is/VBZ
  very/RB
  beautiful/JJ
  place/NN
  and/CC
  i/NN
  want/VBP
  to/TO
  go/VB
  there/RB
  with/IN
  (NP Barbara/NNP Palvin/NNP)
  ./.
  Also/RB
  there/EX
  are/VBP
  stores/NNS
  like/IN
  (NP Adidas/NNP)
  ,/,
  (NP Nike/NNP)
  ,/,
  (NP Reebok/NNP Center/NNP)
  ./.)
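The fix is about tokenization, not the grammar. A minimal sketch (plain Python, no NLTK required) of why str.split() causes the problem in the first place:

```python
# str.split() only splits on whitespace, so a comma written as " ,Nike"
# stays glued to the following word. The POS tagger then sees ",Nike" as
# a single token and tags it NNP, which your NP rule happily absorbs.
text = "stores like Adidas ,Nike ,Reebok Center."
print(text.split())
# ['stores', 'like', 'Adidas', ',Nike', ',Reebok', 'Center.']

# nltk.word_tokenize, by contrast, emits ',' and '.' as separate tokens,
# so they get punctuation tags and break the <NNP>+ sequence as intended.
```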