Scraped from another site..
This is probably the easiest and most convenient way to clean up duplicate files on a system.
There is no need to write regular expressions and compare file sizes by hand; a single package install (shown after the links below) takes care of it…
References
https://code.google.com/p/fdupes/
https://github.com/adrianlopezroche/fdupes
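fdupes is packaged for most Linux distributions; the package name below is the usual one, though your repository may differ:

# Debian/Ubuntu
sudo apt-get install fdupes

# RHEL/CentOS (with EPEL enabled)
sudo yum install fdupes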
-r searches all subdirectories recursively
-S prints the size of the duplicate files
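For example, combining both options (the output shown is illustrative; the paths are made up):

fdupes -rS /home/user

1048576 bytes each:
/home/user/photos/img_001.jpg
/home/user/backup/img_001.jpg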
fdupes has a rich CLI:
fdupes -r ./stuff > dupes.txt
Then deleting the duplicates was as easy as checking dupes.txt and deleting the offending directories. fdupes can also prompt you to delete duplicates as you go (see below).
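The interactive mode is the -d (--delete) flag; adding -N (--noprompt) keeps the first file of each set and removes the rest without asking:

# prompt for which copies to keep in each duplicate set
fdupes -rd ./stuff

# keep the first file in each set, delete the rest, no prompting
fdupes -rdN ./stuff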
fdupes -r /home/user > /home/user/duplicate.txt
The output of the command goes into duplicate.txt.
fdupes finds duplicates by comparing file sizes and MD5 hashes, followed by a byte-by-byte comparison.
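You can approximate the same idea with plain coreutils; this sketch hashes every file and prints groups that share an MD5 hash (unlike fdupes, it skips the size pre-filter and the byte-by-byte check):

# md5sum prints a 32-character hash before each path,
# so uniq -w32 groups lines whose hashes match
find . -type f -exec md5sum {} + | sort | uniq -w32 --all-repeated=separate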
Unix: How to delete files listed in a file
This is not very efficient, but it works if you need glob patterns to be expanded (as in /var/www/*):
for f in $(cat 1.txt) ; do
    # intentionally unquoted so that glob patterns expand
    rm -- $f
done
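Note that $(cat 1.txt) word-splits the whole file on any whitespace, so a path containing a space is torn apart. A line-by-line variant (a sketch; the unquoted expansion still splits on spaces, but each line is at least read separately):

while IFS= read -r pattern ; do
    # unquoted on purpose so glob patterns still expand
    rm -- $pattern
done < 1.txt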
If you don’t have any patterns, you can use xargs like so:
xargs rm < 1.txt
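Plain xargs splits on any whitespace and interprets quote characters, so filenames with spaces will break it. With GNU xargs you can treat each line as a single argument:

xargs -d '\n' rm -- < 1.txt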