Twitter text compression challenge
Rules
When encoding:
The input is Latin1 text, presumably English. Your program must output a message which can be represented in 140 code points in the range U+0000–U+10FFFF, excluding the non-characters (U+FFFE, U+FFFF, U+nFFFE and U+nFFFF where n is 1–10 hexadecimal, and U+FDD0–U+FDEF) and the surrogate code points (U+D800–U+DFFF). It may be output in any reasonable encoding of your choice; any encoding supported by GNU iconv will be considered reasonable, and your platform's native encoding or locale encoding would likely be a good choice.
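A rough back-of-the-envelope capacity check (an estimate, not part of the rules): there are 0x110000 code points, minus 2,048 surrogates and 66 non-characters, leaving 1,111,998 usable values per position, so 140 code points carry about 2,813 bits, or roughly 351 bytes of arbitrary data.

    # Back-of-the-envelope: how much raw data fits in 140 allowed code points.
    import math
    usable = 0x110000 - 2048 - 66            # all code points minus surrogates and non-characters
    bits_per_point = math.log2(usable)       # ~20.08 bits per code point
    print(usable, round(140 * bits_per_point / 8))   # 1111998 usable values, ~351 bytes total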
When decoding:
The output text should be readable by a human, again presumably English.
The mode is set by passing encode or decode as an argument, e.g.:
my-program encode <input.txt >output.utf
my-program decode <output.utf >output.txt
my-program encode input.txt output.utf
my-program decode output.utf output.txt
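A minimal skeleton of this interface, assuming Python and the stdin/stdout form (the compression itself is left as a stub):

    # Hypothetical skeleton of the encode/decode interface sketched above.
    import sys

    def encode(text: str) -> str:
        return text      # stub: a real entry would compress here

    def decode(message: str) -> str:
        return message   # stub: a real entry would decompress here

    if __name__ == "__main__":
        mode = sys.argv[1]                      # "encode" or "decode"
        data = sys.stdin.read()
        sys.stdout.write(encode(data) if mode == "encode" else decode(data))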
The rules are a variation on the rules for the Twitter image encoding challenge.
Not sure if I'll have the time/energy to follow this up with actual code, but here's my idea:
Anything longer than that, and we're starting to lose information in the text. So execute the minimum number of the following steps needed to reduce the string to a length that can then be compressed/encoded using the above methods. Also, don't perform these replacements on the entire string if performing them on just a substring will make it short enough (I would probably walk through the string backwards).
Ok, so now we've eliminated as many excess characters as we can reasonably get rid of. Now we're going to do some more dramatic reductions:
Ok, that's about as far as we can go and still have the text be readable. Beyond this, let's see if we can come up with a method so that the text will resemble the original, even if it isn't ultimately decipherable (again, perform this one character at a time from the end of the string, and stop when it is short enough):
This should leave us with a string consisting of exactly 5 possible values (a, l, n, p, and space), which should allow us to encode pretty lengthy strings.
Beyond that, we'd simply have to truncate.
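To put numbers on that last reduction (my own sketch, not from the original post): with only 5 symbols, each character costs log2(5) ≈ 2.32 bits, so the roughly 2,800-bit budget of 140 code points covers on the order of 1,200 characters. A simple base-5 packing, 8 symbols per code point, could look like this:

    # Sketch: pack a string over the reduced 5-symbol alphabet into integers
    # small enough to emit as single code points (5**8 = 390,625 < 1,111,998).
    ALPHABET = "alnp "        # a, l, n, p, space
    SYMS_PER_POINT = 8

    def pack(reduced: str) -> list[int]:
        values = []
        for i in range(0, len(reduced), SYMS_PER_POINT):
            value = 0
            for ch in reduced[i:i + SYMS_PER_POINT]:
                value = value * 5 + ALPHABET.index(ch)
            values.append(value)   # each value still has to be mapped onto an
                                   # allowed code point (skipping surrogates etc.)
        return values

At 8 symbols per point, 140 code points hold 1,120 reduced characters; anything beyond that is where truncation kicks in.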
The only other technique I can think of would be dictionary-based encoding, for common words or groups of letters. This might give us some benefit for proper sentences, but probably not for arbitrary strings.
Here is my variant for actual English.
Each code point has something like 1,100,000 possible states. Well, that's a lot of space.
So, we stem all the original text and get WordNet synsets from it. Numbers are cast into English names ("forty-two"). 1.1M states will allow us to hold a synset id (which can be between 0 and 82114), the position inside the synset (~10 variants, I suppose) and the synset type (one of four: noun, verb, adjective, adverb). We may even have enough space to store the original form of the word (like a verb tense id).
The decoder just feeds the synsets to WordNet and retrieves the corresponding words.
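One way to make the packing concrete (the field widths here are my assumptions, not something the answer pins down): with 4 synset types and ids up to 82,114, only about 3 sense positions fit per code point before exceeding the ~1.1M usable values.

    # Sketch: pack (synset type, synset id, sense position) into one integer,
    # intended to be emitted as a single code point. Field sizes are assumptions
    # based on the numbers quoted above.
    MAX_ID = 82115        # synset id in 0..82114
    MAX_POS = 3           # 4 * 82115 * 3 = 985,380 < ~1.1M usable values;
                          # with ~10 positions the budget would be exceeded
    TYPES = ["noun", "verb", "adjective", "adverb"]

    def pack_word(type_index: int, synset_id: int, position: int) -> int:
        return (type_index * MAX_ID + synset_id) * MAX_POS + position

    def unpack_word(value: int) -> tuple[int, int, int]:
        value, position = divmod(value, MAX_POS)
        type_index, synset_id = divmod(value, MAX_ID)
        return type_index, synset_id, position

unpack_word is what the decoder would apply to each code point before looking the synset up in WordNet.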
Source text:
A white dwarf is a small star composed mostly of electron-degenerate matter. Because a
white dwarf's mass is comparable to that of the Sun and its volume is comparable to that
of the Earth, it is very dense.
Becomes:
A white dwarf be small star composed mostly electron degenerate matter because white
dwarf mass be comparable sun IT volume be comparable earth IT be very dense
(tested with online WordNet). This "code" should take 27 code points. Of course, all "gibberish" like 'lol' and 'L33T' will be lost forever.
PAQ8O10T << FTW