Get the unique lines of the second file when comparing two files

I have two text files, and I want to read file1 line by line, search for each line in file2, and remove it from file2.

I have this pseudocode, roughly:

while IFS= read -r line; do
    # delete every line of file2.txt that exactly matches "$line"
    # (fragile if the line contains regex metacharacters or slashes)
    sed -i "/^$line$/d" file2.txt
done < file1.txt

3 Answers

You could accomplish this with grep.

Here is an example:

$ echo localhost > local_hosts
$ grep -v -f local_hosts /etc/hosts
127.0.1.1 ubuntu
# The following lines are desirable for IPv6 capable hosts
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters

Note that this keeps the lines of file2 that are not in file1, which is generally what you actually want.
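One caveat: -v -f treats each line of the pattern file as a regular expression and matches substrings, so it can remove more than intended. A small sketch (file names here are just examples) showing why adding -F (fixed strings) and -x (whole-line match) is often safer:

```shell
# Throwaway sample files (names are illustrative):
printf 'alpha\nbeta\n'            > file1
printf 'alpha\nalphabet\ngamma\n' > file2

# Plain -v -f does regex/substring matching, so "alphabet" is also
# removed because it contains the pattern "alpha":
grep -v -f file1 file2            # prints only: gamma

# -F (fixed strings) and -x (match whole lines) keep "alphabet":
grep -Fvx -f file1 file2          # prints: alphabet, then gamma
```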

There are several other possibilities:

comm <(sort file1) <(sort file2) -23

via join

join -v 1 <(sort file1) <(sort file2)

or via awk, which doesn't need the files to be sorted:

awk 'NR==FNR{lines[$0];next} !($0 in lines)' file2 file1
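All three print the lines of file1 that are not in file2. A quick check on throwaway sample data (file names and contents are illustrative; the comm and join variants need bash for the <(...) process substitution):

```shell
# Throwaway sample files:
printf 'a\nb\nc\n' > file1
printf 'b\nc\nd\n' > file2

# Each of these prints just "a", the only line unique to file1:
comm -23 <(sort file1) <(sort file2)
join -v 1 <(sort file1) <(sort file2)
awk 'NR==FNR{lines[$0];next} !($0 in lines)' file2 file1
```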

I found a way to do this through some more internet searching, using just grep and without needing to sort the files.

grep -Fvxf file2 file1

This displays the result on screen, which was a problem, since I wanted to remove what was in file2 from file1 and end up with a new file1. Since the above command worked, I just had to extend it to get what I wanted.

grep -Fvxf file2 file1 > tempfile && mv tempfile file1

That solves my problem. Maybe not the best way, but it works.
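A quick sanity check of that pipeline on throwaway files (names are illustrative):

```shell
printf 'keep\ncommon\n' > file1
printf 'common\n'       > file2

# Filter file1 through file2, then replace file1 with the result.
# Caveat: grep exits non-zero when no lines survive the filter, in
# which case the && skips the mv and leaves file1 unchanged.
grep -Fvxf file2 file1 > tempfile && mv tempfile file1

cat file1   # now contains only: keep
```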
