Text editor to open big (giant, huge, large) text files

I mean 100+ MB big; text files that size push the limits of most editors.

I need to look through a large XML file, but cannot if the editor is buggy.

Any suggestions?


The 010Editor on Windows will open GIANT (think 50 GB) files in binary mode and allow you to edit and search the text.

Community wiki:

Suggestions are:

  • HTMLPen.com is a free online editor that can open and syntax-highlight TB+ files instantly, supports UTF-8, and runs in any modern browser on any OS. (Read-only for big files.)
  • Liquid Studio Large File Editor opens and edits TB+ files instantly, supports UTF-8, Unicode, etc. It is free, covered by the Community Edition (Windows only).
  • SlickEdit
  • Large Text File Viewer (read only)
  • glogg (read only; reads the file directly from disk and handles multi-GB files).
  • HxD (a hex editor rather than a text editor, but good for large files).
  • LogExpert (download) did a swell job on log files of more than 6 GB. It is free.
  • UltraEdit can open files of more than 6 GB, but the configuration must be changed for this to be practical (menu Advanced → Configuration → File Handling → Temporary Files → "Open file without temp file...").
  • wxHexEditor can open such files instantly and works on Linux, Windows, and Mac OS X.
  • EmEditor handles very large text files nicely: officially up to 248 GB, but up to 900 GB in my experience.
  • Or, if you just want to peek at the start of the file, the Windows built-in more command might be good enough (see the example after this list).
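
For example, a minimal sketch (big.xml is a placeholder filename, and the +n switch assumes more's extended features, which are enabled by default on modern Windows):

    C:\> more big.xml
    C:\> more +1000 big.xml

The first command pages through the file one screen at a time; the second starts the display at line 1000, skipping everything before it.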


    Why are you using editors to just look at a (large) file?

    Under *nix or Cygwin, just use less ("less is more", only better, since you can back up). Searching and navigating under less is very similar to Vim, but there is no swap file and little RAM used.

    There is also a native Win32 port of GNU "less".
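
    For example, a minimal sketch of a session (the humongo.txt file name, reused from the examples below, and the "ERROR" pattern are placeholders; the -n switch suppresses line numbering, which can make less noticeably faster on very large files):

    $ less -n humongo.txt

    (inside less, type /ERROR to search forward for "ERROR", n to jump to the next match, G to jump to the end of the file, and q to quit)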

    Perl's ".." (range flip/flop) operator also makes a nice selection mechanism to limit the crud you have to wade through.

    For example:

    $ perl -n -e 'print if ( 1000000 .. 2000000)' humongo.txt | less
    

    (start at line 1 million and stop at line 2 million, sift the output manually in "less")

    $ perl -n -e 'print if ( /interesting regex/ .. /boring regex/)' humongo.txt | less
    

    (start when the "interesting regular expression" finds something, stop when the "boring regular expression" find the end of an interesting block -- may find multiple blocks, sift the output...)

    Finally, 100 MB isn't too big. 3 GB is getting kind of big. I used to work at a print & mail facility that created about 2% of US first-class mail. One of the systems for which I was the tech lead accounted for about 15+% of the pieces of mail. We had some big files to debug here and there.

    Community Wiki Suggestions:

    Use LogParser to look at the file:

    logparser.exe -i:textline -o:tsv "select Index, Text from 'c:\path\to\file.log' where Index > 1000 and Index < 2000"
    
    logparser.exe -i:textline -o:tsv "select Index, Text from 'c:\path\to\file.log' where Text like '%pattern%'"
    