How to output NLTK chunks to file?
I have this Python script where I am using the nltk library to parse, tokenize, tag and chunk some, let's say, random text from the web.

I need to format and write to a file the output of `chunked1`, `chunked2` and `chunked3`, which have the type `nltk.tree.Tree`. More specifically, I need to write only the lines that match the regular expressions in `chunkGram1`, `chunkGram2` and `chunkGram3`.

How can I do that?
#! /usr/bin/python2.7
import nltk
import re
import codecs

xstring = ["An electronic library (also referred to as digital library or digital repository) is a focused collection of digital objects that can include text, visual material, audio material, video material, stored as electronic media formats (as opposed to print, micro form, or other media), along with means for organizing, storing, and retrieving the files and media contained in the library collection. Digital libraries can vary immensely in size and scope, and can be maintained by individuals, organizations, or affiliated with established physical library buildings or institutions, or with academic institutions.[1] The electronic content may be stored locally, or accessed remotely via computer networks. An electronic library is a type of information retrieval system."]

def processLanguage():
    for item in xstring:
        tokenized = nltk.word_tokenize(item)
        tagged = nltk.pos_tag(tokenized)
        #print tokenized
        #print tagged

        chunkGram1 = r"""Chunk: {<JJ\w?>*<NN>}"""
        chunkGram2 = r"""Chunk: {<JJ\w?>*<NNS>}"""
        chunkGram3 = r"""Chunk: {<NNP\w?>*<NNS>}"""

        chunkParser1 = nltk.RegexpParser(chunkGram1)
        chunked1 = chunkParser1.parse(tagged)
        chunkParser2 = nltk.RegexpParser(chunkGram2)
        chunked2 = chunkParser2.parse(tagged)
        chunkParser3 = nltk.RegexpParser(chunkGram3)
        chunked3 = chunkParser3.parse(tagged)
        #print chunked1
        #print chunked2
        #print chunked3

        # with codecs.open('path\to\file\output.txt', 'w', encoding='utf8') as outfile:
        #     for i, line in enumerate(chunked1):
        #         if "JJ" in line:
        #             outfile.write(line)
        #         elif "NNP" in line:
        #             outfile.write(line)

processLanguage()
For the time being, when I try to run it I get this error:
Traceback (most recent call last):
  File "sentdex.py", line 47, in <module>
    processLanguage()
  File "sentdex.py", line 40, in processLanguage
    outfile.write(line)
  File "C:\Python27\lib\codecs.py", line 688, in write
    return self.writer.write(data)
  File "C:\Python27\lib\codecs.py", line 351, in write
    data, consumed = self.encode(object, self.errors)
TypeError: coercing to Unicode: need string or buffer, tuple found
Edit: After @alvas's answer I managed to do what I wanted. However, now I would like to know how I could strip all non-ASCII characters from a text corpus. Example:
#store cleaned file into variable
with open('path\to\file.txt', 'r') as infile:
    xstring = infile.readlines()
infile.close

def remove_non_ascii(line):
    return ''.join([i if ord(i) < 128 else ' ' for i in line])

for i, line in enumerate(xstring):
    line = remove_non_ascii(line)
#tokenize and tag text
def processLanguage():
    for item in xstring:
        tokenized = nltk.word_tokenize(item)
        tagged = nltk.pos_tag(tokenized)
        print tokenized
        print tagged

processLanguage()
The above is taken from another answer here on S/O. However, it doesn't seem to work. What might be wrong? The error I am getting is:

UnicodeDecodeError: 'ascii' codec can't decode byte 0xe2 in position ...: ordinal not in range(128)
Your code has several problems, though the main culprit is that your `for` loop does not modify the contents of `xstring`. I will address all the issues in your code here:
You cannot write paths like this with a single `\`, as `\t` will be interpreted as a tab and `\f` as a form feed character. You must double them. I know it was an example here, but such confusions often arise:

with open('path\\to\\file.txt', 'r') as infile:
    xstring = infile.readlines()
The following `infile.close` line is wrong: it does not call the close method, and in fact does not do anything at all. Furthermore, your file was already closed by the `with` clause. If you see this line in any answer anywhere, please just downvote the answer outright with a comment saying that `file.close` is wrong and should be `file.close()`.
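To convince yourself that the `with` block already closes the file (and that a bare `file.close` does nothing), here is a minimal self-contained demo; the file name is made up for illustration:

```python
import os
import tempfile

# a throwaway file just for the demonstration
path = os.path.join(tempfile.gettempdir(), 'close_demo.txt')

with open(path, 'w') as f:
    f.write('hello')
    f.close              # no-op: references the method but never calls it
    print(f.closed)      # False: the file is still open inside the block
print(f.closed)          # True: the with statement closed it on exit

os.remove(path)
```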
The following should work, but you need to be aware that replacing every non-ASCII character with `' '` will break words such as naïve and café:

def remove_non_ascii(line):
    return ''.join([i if ord(i) < 128 else ' ' for i in line])
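If losing the accented letters matters for your corpus, a gentler alternative (my suggestion, not part of the original code) is to transliterate them to their base ASCII letters with the standard `unicodedata` module instead of blanking them out:

```python
# -*- coding: utf-8 -*-
import unicodedata

def to_ascii(line):
    # NFKD decomposes accented characters into base letter + combining mark;
    # encoding to ASCII with errors='ignore' then drops the marks
    decomposed = unicodedata.normalize('NFKD', line)
    return decomposed.encode('ascii', 'ignore').decode('ascii')

print(to_ascii(u'na\xefve caf\xe9'))  # -> naive cafe
```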
But here is the reason why your code fails with a Unicode exception: you are not modifying the elements of `xstring` at all. That is, you are calculating the line with non-ASCII characters removed, yes, but that is a new value, and it is never stored back into the list:

for i, line in enumerate(xstring):
    line = remove_non_ascii(line)
Instead it should be:
for i, line in enumerate(xstring):
    xstring[i] = remove_non_ascii(line)
or, my preferred, very Pythonic:

xstring = [remove_non_ascii(line) for line in xstring]
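To see the difference concretely, here is a small self-contained demo (the sample strings are invented for illustration):

```python
def remove_non_ascii(line):
    return ''.join([i if ord(i) < 128 else ' ' for i in line])

xstring = [u'caf\xe9 menu', u'plain ascii']

# broken: rebinding the loop variable leaves the list untouched
for i, line in enumerate(xstring):
    line = remove_non_ascii(line)
print(xstring[0])   # still u'caf\xe9 menu', unchanged

# fixed: the comprehension builds a new, cleaned list
xstring = [remove_non_ascii(line) for line in xstring]
print(xstring[0])   # u'caf  menu'
```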
These Unicode errors occur mainly because you are using Python 2.7 to handle pure Unicode text, something at which recent Python 3 versions are way ahead. So if you are at the very beginning of your task, I'd recommend that you upgrade to Python 3.4+ soon.
Firstly, see this video: https://www.youtube.com/watch?v=0Ef9GudbxXY
Now for the proper answer:
import re
import io
from nltk import pos_tag, word_tokenize, sent_tokenize, RegexpParser

xstring = u"An electronic library (also referred to as digital library or digital repository) is a focused collection of digital objects that can include text, visual material, audio material, video material, stored as electronic media formats (as opposed to print, micro form, or other media), along with means for organizing, storing, and retrieving the files and media contained in the library collection. Digital libraries can vary immensely in size and scope, and can be maintained by individuals, organizations, or affiliated with established physical library buildings or institutions, or with academic institutions.[1] The electronic content may be stored locally, or accessed remotely via computer networks. An electronic library is a type of information retrieval system."

chunkGram1 = r"""Chunk: {<JJ\w?>*<NN>}"""
chunkParser1 = RegexpParser(chunkGram1)

chunked = [chunkParser1.parse(pos_tag(word_tokenize(sent)))
           for sent in sent_tokenize(xstring)]

with io.open('outfile', 'w', encoding='utf8') as fout:
    for chunk in chunked:
        fout.write(str(chunk)+'\n\n')
[out]:
alvas@ubi:~$ python test2.py
Traceback (most recent call last):
  File "test2.py", line 18, in <module>
    fout.write(str(chunk)+'\n\n')
TypeError: must be unicode, not str

alvas@ubi:~$ python3 test2.py
alvas@ubi:~$ head outfile
(S
  An/DT
  (Chunk electronic/JJ library/NN)
  (/:
  also/RB
  referred/VBD
  to/TO
  as/IN
  (Chunk digital/JJ library/NN)
  or/CC
If you have to stick to Python 2.7:
with io.open('outfile', 'w', encoding='utf8') as fout:
    for chunk in chunked:
        fout.write(unicode(chunk)+'\n\n')
[out]:
alvas@ubi:~$ python test2.py
alvas@ubi:~$ head outfile
(S
  An/DT
  (Chunk electronic/JJ library/NN)
  (/:
  also/RB
  referred/VBD
  to/TO
  as/IN
  (Chunk digital/JJ library/NN)
  or/CC

alvas@ubi:~$ python3 test2.py
Traceback (most recent call last):
  File "test2.py", line 18, in <module>
    fout.write(unicode(chunk)+'\n\n')
NameError: name 'unicode' is not defined
And, strongly recommended if you must stick with Python 2.7:
from six import text_type

with io.open('outfile', 'w', encoding='utf8') as fout:
    for chunk in chunked:
        fout.write(text_type(chunk)+'\n\n')
[out]:
alvas@ubi:~$ python test2.py
alvas@ubi:~$ head outfile
(S
  An/DT
  (Chunk electronic/JJ library/NN)
  (/:
  also/RB
  referred/VBD
  to/TO
  as/IN
  (Chunk digital/JJ library/NN)
  or/CC

alvas@ubi:~$ python3 test2.py
alvas@ubi:~$ head outfile
(S
  An/DT
  (Chunk electronic/JJ library/NN)
  (/:
  also/RB
  referred/VBD
  to/TO
  as/IN
  (Chunk digital/JJ library/NN)
  or/CC
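Finally, since the original question asked to write out only the material the grammar actually matched: `nltk.tree.Tree` has a `subtrees()` method, so you can keep just the subtrees labelled `Chunk` instead of the whole parse. A minimal sketch with a hand-tagged sentence (the tags are assumed, so the sketch does not depend on the POS tagger's data files):

```python
from nltk import RegexpParser

# a pre-tagged sentence, hand-written for illustration
tagged = [('An', 'DT'), ('electronic', 'JJ'), ('library', 'NN'),
          ('is', 'VBZ'), ('a', 'DT'), ('digital', 'JJ'), ('repository', 'NN')]

chunkGram1 = r"""Chunk: {<JJ\w?>*<NN>}"""
chunked = RegexpParser(chunkGram1).parse(tagged)

# keep only the subtrees the grammar labelled 'Chunk'
chunks = [subtree for subtree in chunked.subtrees()
          if subtree.label() == 'Chunk']
for subtree in chunks:
    print(subtree)
```

Writing `str(subtree)` (or `text_type(subtree)` on Python 2.7) to the `io.open` file handle then works exactly as in the code above.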