Capturing output of find . -print0

Using

find . -print0

seems to be the only safe way of obtaining a list of files in bash, due to the possibility of filenames containing spaces, newlines, quotation marks, etc.
However, I'm having a hard time actually making find's output useful within bash or with other command-line utilities. The only way I have managed to make use of the output is by piping it to perl and setting perl's record separator ($/) to null:
find . -print0 | perl -e '$/="\0"; @files=<>; print scalar @files;'
This example prints the number of files found, avoiding the danger of newlines in filenames corrupting the count, as would occur with:
find . | wc -l
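(For the counting case specifically, the perl detour may not even be needed: each record that find emits ends in exactly one NUL byte, so deleting every other byte and counting what remains counts the files. A minimal sketch; the temporary directory and file names are invented for illustration:)

```shell
#!/bin/bash
# Invented demo files; one name deliberately contains a newline.
dir=$(mktemp -d)
touch "$dir/plain" "$dir/"$'with\nnewline'

# Keep only the NUL bytes and count them: one NUL per record,
# so newlines inside filenames cannot skew the result.
count=$(( $(find "$dir" -type f -print0 | tr -dc '\0' | wc -c) ))
echo "$count"

rm -rf "$dir"
```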
As most command-line programs do not support null-delimited input, I figure the best thing would be to capture the output of

find . -print0

in a bash array, as I have done in the perl snippet above, and then continue with the task, whatever it may be.
How can I do this?
This doesn't work:
find . -print0 | ( IFS=$'\0' ; array=( $( cat ) ) ; echo ${#array[@]} )
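Update: I suspect the snippet above fails because bash variables, including IFS, cannot contain NUL bytes at all. On bash 4.4 or newer, mapfile with an empty delimiter appears to do exactly what I want (directory and file names below are invented):

```shell
#!/bin/bash
# Requires bash >= 4.4 for mapfile -d. Demo names are invented.
dir=$(mktemp -d)
touch "$dir/a" "$dir/"$'b\nc'

# -d '' makes NUL the record delimiter; -t strips it from each element.
mapfile -t -d '' files < <(find "$dir" -type f -print0)
echo "${#files[@]}"

rm -rf "$dir"
```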
A much more general question might be: How can I do useful things with lists of files in bash?
Shamelessly stolen from Greg's BashFAQ:
unset a i
while IFS= read -r -d $'\0' file; do
a[i++]="$file" # or however you want to process each file
done < <(find /tmp -type f -print0)
Note that the redirection construct used here ( cmd1 < <(cmd2) ) is similar to, but not quite the same as, the more usual pipeline ( cmd2 | cmd1 ) -- if the commands are shell builtins (eg while ), the pipeline version executes them in subshells, and any variables they set (eg the array a ) are lost when they exit. cmd1 < <(cmd2) only runs cmd2 in a subshell, so the array lives past its construction. Warning: this form of redirection is only available in bash, not even bash in sh-emulation mode; you must start your script with #!/bin/bash .
Also, because the file processing step (in this case, just a[i++]="$file" , but you might want to do something fancier directly in the loop) has its input redirected, it cannot use any commands that might read from stdin. To avoid this limitation, I tend to use:
unset a i
while IFS= read -r -u3 -d $'\0' file; do
a[i++]="$file" # or however you want to process each file
done 3< <(find /tmp -type f -print0)
...which passes the file list via file descriptor 3, rather than stdin.
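Once the loop has run, the usual quoted expansions work on the array; a quick sketch (file names invented) showing that a newline inside a name survives:

```shell
#!/bin/bash
# Invented demo files; one name contains an embedded newline.
dir=$(mktemp -d)
touch "$dir/one" "$dir/"$'two\nthree'

unset a i
while IFS= read -r -u3 -d $'\0' file; do
  a[i++]=$file
done 3< <(find "$dir" -type f -print0)

# "${a[@]}" expands to one word per stored filename,
# regardless of what characters the names contain.
printf 'saw %d entries\n' "${#a[@]}"

rm -rf "$dir"
```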
Maybe you are looking for xargs:
find . -print0 | xargs -r0 do_something_useful
The option -L 1 could also be useful for you; it makes xargs exec do_something_useful with only one file argument at a time.
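For example (file names below are invented), asking xargs to report how many arguments its command received shows that each NUL-delimited name arrives as exactly one argument, awkward characters and all:

```shell
#!/bin/bash
# Invented demo files with a space and a newline in the names.
dir=$(mktemp -d)
touch "$dir/a b" "$dir/"$'c\nd'

# -0 splits on NUL; with no batching options both names land
# in a single invocation, as two separate arguments.
batched=$(find "$dir" -type f -print0 | xargs -0 sh -c 'echo "$#"' _)
echo "$batched"

rm -rf "$dir"
```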
The main problem is that the NUL delimiter ( \0 ) is useless here, because it isn't possible to assign a NUL value to IFS. So, as good programmers, we take care that the input for our program is something it is able to handle.
First we create a little program, which does this part for us:
#!/bin/bash
printf "%s" "$@" | base64 -w 0   # -w 0 (GNU coreutils) disables line wrapping
...and call it base64str (don't forget chmod +x)
Second we can now use a simple and straightforward for-loop:
for i in `find -type f -exec base64str '{}' \;`
do
file="`echo -n "$i" | base64 -d`"
# do something with file
done
So the trick is that a base64 string contains no characters that cause trouble for bash - of course, xxd or something similar can also do the job.
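A quick way to convince yourself that the round trip is lossless (the sample name is invented; base64 -w 0 is the GNU coreutils flag for unwrapped output):

```shell
#!/bin/bash
# An invented filename packed with characters that usually break word splitting.
name=$'weird\n name "with" $stuff'

# Encode, then decode; the awkward characters must come back intact.
encoded=$(printf '%s' "$name" | base64 -w 0)
decoded=$(printf '%s' "$encoded" | base64 -d)

[ "$decoded" = "$name" ] && echo "round trip OK"
```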