Report or omit repeated lines.
uniq [OPTION]... [INPUT [OUTPUT]]
-c, --count Prefix each line with the number of times it occurred.
-d, --repeated Print only duplicate lines, one for each group of adjacent repeats.
-D Print all adjacent duplicate lines.
--all-repeated[=METHOD] Like -D, but allows each group to be separated by a blank line. The METHOD value range is {none (default), prepend, separate}.
-f, --skip-fields=N Skip comparison of the first N fields.
--group[=METHOD] Show all lines, separating groups with a blank line. METHOD value range: {separate (default), prepend, append, both}.
-i, --ignore-case Ignore differences in case.
-s, --skip-chars=N Skip comparison of the first N characters.
-u, --unique Print only lines that are not repeated (lines with no adjacent duplicate).
-z, --zero-terminated Use a NUL byte as the line terminator instead of a newline.
-w, --check-chars=N Compare only the first N characters of each line.
--help Display help information and exit.
--version Display version information and exit.
INPUT (optional): input file; standard input is read if not provided.
OUTPUT (optional): output file; standard output is used if not provided.
An exit status of 0 indicates success; a non-zero value indicates failure.
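For example, assuming a hypothetical log file access.log, the options can be combined to count repeated lines while ignoring case (sort -f folds case so variants become adjacent and uniq can group them):
sort -f access.log | uniq -ic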
Deduplicate lines in a file (the results of command 2 and command 3 below are identical; command 1 only deduplicates adjacent lines):
uniq file.txt
sort file.txt | uniq
sort -u file.txt
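As an illustration, assume a hypothetical file.txt containing the lines:
apple
banana
apple
apple
uniq file.txt prints apple, banana, apple (only the adjacent repeats are collapsed), while sort file.txt | uniq and sort -u file.txt both print apple, banana.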
Print only the lines that occur exactly once; the two commands differ in whether the input is sorted first:
uniq -u file.txt
sort file.txt | uniq -u
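With the same hypothetical file.txt, sort file.txt | uniq -u prints only banana, since apple occurs more than once; without sorting, uniq -u file.txt prints apple and banana, because the first apple has no adjacent duplicate.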
Count the number of times each line appears in the file:
sort file.txt | uniq -c
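With the same hypothetical file.txt, sort file.txt | uniq -c would output something like:
      3 apple
      1 banana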
Find duplicate lines in a file:
sort file.txt | uniq -d
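With the same hypothetical file.txt, sort file.txt | uniq -d prints only apple, the one line that occurs more than once.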