
I am trying to remove a string from many text files on one of our servers. The string is identical across all these files and I can run:

grep -r -l 'string'  

to get the file list but I am stuck on how to get the files edited and written out to their original locations again. Sounds like a job for sed but not sure how to handle the output.

Manny T

5 Answers


find . -type f -print0 | xargs -0 -n 1 sed -i '/string/d' will do the trick, handling spaces in filenames and arbitrarily nested frufru, since apparently people aren't capable of expanding * on their own.
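A quick sandbox run of that pipeline, for anyone who wants to see it work before pointing it at real files (the directory layout and file contents below are made up for illustration):

```shell
# Build a throwaway tree with one matching and one non-matching file,
# including a directory name with a space to exercise -print0/-0
dir=$(mktemp -d)
mkdir -p "$dir/nested dir"
printf 'keep\nstring here\nkeep too\n' > "$dir/a.txt"
printf 'no match at all\n' > "$dir/nested dir/b.txt"

# Delete every line containing 'string', editing each file in place
find "$dir" -type f -print0 | xargs -0 -n 1 sed -i '/string/d'

cat "$dir/a.txt"    # the 'string here' line is gone; the other lines survive
```

Note that sed -i rewrites even the files with no match, so timestamps change across the board; if that matters, filter with grep -rlZ first and feed the NUL-separated list to xargs -0.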

womble

Here's my script for this sort of thing, which I call remove_line:

#!/usr/bin/perl

use strict;
use warnings;
use IO::Handle;

my $pat = shift(@ARGV) or
        die("Usage: $0 pattern files\n");
$pat = qr/$pat/;
die("Usage: $0 pattern files\n")
        unless @ARGV;

foreach my $file (@ARGV) {
        my $io = new IO::Handle;
        open($io, $file) or
                die("Cannot read $file: $!\n");
        my @file = <$io>;
        close($io);
        my $found = 0;  # reset per file, so unmatched files are left untouched
        foreach my $line (@file) {
                if($line =~ /$pat/) {
                        $line = '';
                        $found = 1;
                        last;   # only the first matching line is removed
                }
        }
        if($found) {
                open($io, ">$file") or
                        die("Cannot write $file: $!\n");
                print $io @file;
                close($io);
        }
}

So you run remove_line 'string' followed by the files in your list.

The advantages of doing this over sed are that you don't have to worry about the platform-dependent behavior of sed -i, and you can use Perl regexes for the matching pattern.

chaos

Ugh. I'm not a shell wizard at all, but I'd look at a pipe to xargs and then sed to remove the line with the string in question.

Little bit of Google perusal makes me think that this might make Bob your stepuncle - close enough to get there anyway. Note that sed needs -i to edit the files in place (otherwise it just prints the result to stdout):

grep -r -l 'string' . | xargs sed -i '/string/d'
mfinni

Ummmmmm, this is a perl one-liner, thanks to the lovely -i flag for in-place filtering of input files!!

   perl -ni.bak -e 'print unless /pattern.to.remove/' file1 file2 ...

In context...

% echo -e 'foo\ngoo\nboo' >test
% perl -ni.bak -e 'print unless /goo/' test
% diff test*
--- test 2010-01-06 05:09:13.503334739 -0800
+++ test.bak 2010-01-06 05:08:28.313583066 -0800
@@ -1,2 +1,3 @@
 foo
+goo
 boo

here is the trimmed quick-reference on the perl incantation used...

% perl --help
Usage: perl [switches] [--] [programfile] [arguments]
  -e program        one line of program (several -e's allowed, omit programfile)
  -i[extension]     edit <> files in place (makes backup if extension supplied)
  -n                assume "while (<>) { ... }" loop around program

And for extra credit, you can use touch -r file.bak file to copy the old timestamp to the new file. The inodes will differ, though, and strange things may happen if you have hard links in the mix... check the docs if you're that motivated to cover your tracks. Hmmmmm, what was your application again?

hackvan

Don't forget about the -v option in grep, which reverses the sense of the match - it prints every line except the ones that match:

grep -v 'string' file

(Combining -v with -l isn't useful here, by the way: -l with -v lists files that contain at least one non-matching line, not files that are free of the string.)

From the grep man page:

-v, --invert-match
Invert the sense of matching, to select non-matching lines.  (-v is specified by POSIX.)

You may then be able to combine that with find, something like this:

find . -type f -exec grep -v 'string' {} \;

And that's getting close to what you want... but of course you'll need to write the result back to the original file...
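To close that gap: you can't redirect grep's output straight onto the file you're reading from (the shell truncates it first), so the usual move is a temp file plus mv. A sketch, with an illustrative file name:

```shell
f=$(mktemp)
printf 'alpha\nstring to remove\nomega\n' > "$f"

# grep -v keeps the non-matching lines; mv only replaces the
# original once grep has finished successfully
grep -v 'string' "$f" > "$f.tmp" && mv "$f.tmp" "$f"

cat "$f"    # alpha and omega remain; the matching line is gone
```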

hookenz