Commit 27f91957 authored by Joao Eriberto Mota Filho's avatar Joao Eriberto Mota Filho

Imported Debian patch 1.7-1

parents 836ccbad 525656d7
jdupes 1.7
- Incompatible change: zero-length files are no longer duplicates by default
- New -z/--zeromatch option to consider zero-length files as duplicates
- I/O chunk size changed for better performance
- The PROGRAM_NAME variable is now used properly during make
- Program was re-organized into several split C files
jdupes 1.6.2
- Fix: version number shown in jdupes -v wasn't updated in 1.6.1
@@ -25,7 +25,7 @@ PREFIX = /usr
 #####################################################################
 # PROGRAM_NAME determines the installation name and manual page name
-PROGRAM_NAME=jdupes
+PROGRAM_NAME = jdupes
 # BIN_DIR indicates directory where program is to be installed.
 # Suggested value is "$(PREFIX)/bin"
@@ -98,20 +98,22 @@ INSTALL_DATA = $(INSTALL) -c -m 0644
 # to support features not supplied by their vendor. Eg: GNU getopt()
 #ADDITIONAL_OBJECTS += getopt.o
-OBJECT_FILES += jdupes.o jody_hash.o jody_paths.o jody_sort.o string_malloc.o $(ADDITIONAL_OBJECTS)
+OBJECT_FILES += jdupes.o jody_hash.o jody_paths.o jody_sort.o jody_win_unicode.o string_malloc.o
+OBJECT_FILES += act_deletefiles.o act_dedupefiles.o act_linkfiles.o act_printmatches.o act_summarize.o
+OBJECT_FILES += $(ADDITIONAL_OBJECTS)
 all: jdupes
 jdupes: $(OBJECT_FILES)
-	$(CC) $(CFLAGS) $(LDFLAGS) -o jdupes $(OBJECT_FILES)
+	$(CC) $(CFLAGS) $(LDFLAGS) -o $(PROGRAM_NAME) $(OBJECT_FILES)
 installdirs:
 	test -d $(DESTDIR)$(BIN_DIR) || $(MKDIR) $(DESTDIR)$(BIN_DIR)
 	test -d $(DESTDIR)$(MAN_DIR) || $(MKDIR) $(DESTDIR)$(MAN_DIR)
 install: jdupes installdirs
-	$(INSTALL_PROGRAM) jdupes $(DESTDIR)$(BIN_DIR)/$(PROGRAM_NAME)
-	$(INSTALL_DATA) jdupes.1 $(DESTDIR)$(MAN_DIR)/$(PROGRAM_NAME).$(MAN_EXT)
+	$(INSTALL_PROGRAM) $(PROGRAM_NAME) $(DESTDIR)$(BIN_DIR)/$(PROGRAM_NAME)
+	$(INSTALL_DATA) $(PROGRAM_NAME).1 $(DESTDIR)$(MAN_DIR)/$(PROGRAM_NAME).$(MAN_EXT)
 clean:
-	$(RM) $(OBJECT_FILES) jdupes jdupes.exe *~ *.gcno *.gcda *.gcov
+	$(RM) $(OBJECT_FILES) $(PROGRAM_NAME) jdupes.exe *~ *.gcno *.gcda *.gcov
@@ -87,7 +87,6 @@ Usage: jdupes [options] DIRECTORY...
  -L --linkhard     hard link all duplicate files without prompting
                    Windows allows a maximum of 1023 hard links per file
  -m --summarize    summarize dupe information
- -n --noempty      exclude zero-length files from consideration
  -N --noprompt     together with --delete, preserve the first file in
                    each set of duplicates and delete the rest without
                    prompting the user
@@ -108,8 +107,12 @@ Usage: jdupes [options] DIRECTORY...
  -x --xsize=SIZE   exclude files of size < SIZE bytes from consideration
     --xsize=+SIZE  '+' specified before SIZE, exclude size > SIZE
                    K/M/G size suffixes can be used (case-insensitive)
+ -z --zeromatch    consider zero-length files to be duplicates
  -Z --softabort    If the user aborts (i.e. CTRL-C) act on matches so far
+
+The -n/--noempty option was removed for safety. Matching zero-length files as
+duplicates now requires explicit use of the -z/--zeromatch option instead.
 
 Duplicate files are listed together in groups with each file displayed on a
 separate line. The groups are then separated from each other by blank lines.
@@ -209,6 +212,85 @@ sets after matching finishes without the "only ever appears once"
 guarantee.
Does jdupes meet the "Good Practice when Deleting Duplicates" by rmlint?
--------------------------------------------------------------------------
Yes. If you've not read this list of cautions, it is available at
http://rmlint.readthedocs.io/en/latest/cautions.html
Here's a breakdown of how jdupes addresses each of the items listed.
"Backup your data"
"Measure twice, cut once"
These guidelines are for the user of duplicate scanning software, not the
software itself. Back up your files regularly. Use jdupes to print a list
of what is found as duplicated and check that list very carefully before
automatically deleting the files.
"Beware of unusual filename characters"
The only character that poses a concern in jdupes is a newline '\n' and
that is only a problem because the duplicate set printer uses them to
separate file names. Actions taken by jdupes are not parsed like a
command line, so spaces and other unusual characters in names aren't a
problem. Escaping the names properly when acting on the printed output
is left to the user's shell script or other external program.
"Consider safe removal options"
This is also an exercise for the user.
"Traversal Robustness"
jdupes tracks each directory traversed by dev:inode pair to avoid adding
the contents of the same directory twice. This prevents the user from
being able to register all of their files twice by duplicating an entry
on the command line. Symlinked directories are not followed. Files are
renamed to a temporary name before any linking is done and if the link
operation fails they are renamed back to the original name.
"Collision Robustness"
jdupes uses jodyhash for file data hashing. This hash is extremely fast
with a low collision rate, but it still encounters collisions as any hash
function will ("secure" or otherwise) due to the "birthday problem." This
is why jdupes performs a full-file verification before declaring a match.
It is slower than matching on hashes alone, but the birthday problem puts
all data sets larger than the hash at risk of collision, meaning a false
duplicate detection and data loss. The slower completion time is not as
important as data integrity. Checking for a match based on hashes alone
is irresponsible, and using secure hashes like MD5 or the SHA families
is orders of magnitude slower than jodyhash while still suffering from
the risk brought about by the birthday problem. In short, the birthday
problem means that with 365 days in a year and 366 people, at least two
people are guaranteed to share a birthday; likewise, even though SHA512
is a 512-bit (64-byte) wide hash, there are 256 times as many possible
65-byte (520-bit) inputs as there are hash values, so colliding pairs of
data streams are guaranteed to exist once the streams being hashed for
comparison reach 65 bytes or larger.
"Unusual Characters Robustness"
jdupes does not protect the user from putting ASCII control characters in
their file names; they will mangle the output if printed, but they can
still be operated upon by the actions (delete, link, etc.) in jdupes.
"Seek Thrash Robustness"
jdupes uses an I/O chunk size that is optimized for reading as much as
possible from disk at once to take advantage of high sequential read
speeds in traditional rotating media drives while balancing against the
significantly higher rate of CPU cache misses triggered by an excessively
large I/O buffer size. Enlarging the I/O buffer further may allow for
lots of large files to be read with less head seeking, but the CPU cache
misses slow the algorithm down and memory usage increases to hold these
large buffers. jdupes is benchmarked periodically to make sure that the
chosen I/O chunk size is the best compromise for a wide variety of data
sets.
"Memory Usage Robustness"
This is a very subjective concern considering that even a cell phone in
someone's pocket has at least 1GB of RAM; however, it still applies in
the embedded device world, where 32MB of RAM might be all you can have.
Even when processing a data set with over a million files, jdupes memory
usage (tested on Linux x86_64 with -O3 optimization) doesn't exceed 2GB.
A low memory mode can be chosen at compile time to reduce overall memory
usage with a small performance penalty.
Contact Information
--------------------------------------------------------------------------
For all jdupes inquiries, contact Jody Bruchon <jody@jodybruchon.com>
/* BTRFS deduplication of file blocks */
#include "jdupes.h"
#ifdef ENABLE_BTRFS
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <errno.h>
#include <fcntl.h>
#include <sys/ioctl.h>
#include <linux/btrfs.h>
#include "act_dedupefiles.h"
/* Message to append to BTRFS warnings based on write permissions */
static const char *readonly_msg[] = {
"",
" (no write permission)"
};
static char *dedupeerrstr(int err) {
static char buf[256];
buf[sizeof(buf)-1] = '\0';
if (err == BTRFS_SAME_DATA_DIFFERS) {
snprintf(buf, sizeof(buf), "BTRFS_SAME_DATA_DIFFERS (data modified in the meantime?)");
return buf;
} else if (err < 0) {
return strerror(-err);
} else {
snprintf(buf, sizeof(buf), "Unknown error %d", err);
return buf;
}
}
extern void dedupefiles(file_t * restrict files)
{
struct btrfs_ioctl_same_args *same;
char **dupe_filenames; /* maps to same->info indices */
file_t *curfile;
unsigned int n_dupes, max_dupes, cur_info;
unsigned int cur_file = 0, max_files, total_files = 0;
int fd;
int ret, status, readonly;
LOUD(fprintf(stderr, "\nRunning dedupefiles()\n");)
/* Find the largest dupe set, alloc space to hold structs for it */
get_max_dupes(files, &max_dupes, &max_files);
/* Kernel dupe count is a uint16_t so exit if the type's limit is exceeded */
if (max_dupes > 65535) {
fprintf(stderr, "Largest duplicate set (%d) exceeds the 65535-file dedupe limit.\n", max_dupes);
fprintf(stderr, "Ask the program author to add this feature if you really need it. Exiting!\n");
exit(EXIT_FAILURE);
}
same = calloc(sizeof(struct btrfs_ioctl_same_args) +
sizeof(struct btrfs_ioctl_same_extent_info) * max_dupes, 1);
dupe_filenames = malloc(max_dupes * sizeof(char *));
LOUD(fprintf(stderr, "dedupefiles structs: alloc1 size %lu => %p, alloc2 size %lu => %p\n",
sizeof(struct btrfs_ioctl_same_args) + sizeof(struct btrfs_ioctl_same_extent_info) * max_dupes,
(void *)same, max_dupes * sizeof(char *), (void *)dupe_filenames);)
if (!same || !dupe_filenames) oom("dedupefiles() structures");
/* Main dedupe loop */
while (files) {
if (ISFLAG(files->flags, F_HAS_DUPES) && files->size) {
cur_file++;
if (!ISFLAG(flags, F_HIDEPROGRESS)) {
fprintf(stderr, "Dedupe [%u/%u] %u%% \r", cur_file, max_files,
cur_file * 100 / max_files);
}
/* Open each file to be deduplicated */
cur_info = 0;
for (curfile = files->duplicates; curfile; curfile = curfile->duplicates) {
int errno2;
/* Never allow hard links to be passed to dedupe */
if (curfile->device == files->device && curfile->inode == files->inode) {
LOUD(fprintf(stderr, "skipping hard linked file pair: '%s' = '%s'\n", curfile->d_name, files->d_name);)
continue;
}
dupe_filenames[cur_info] = curfile->d_name;
readonly = 0;
if (access(curfile->d_name, W_OK) != 0) readonly = 1;
fd = open(curfile->d_name, O_RDWR);
LOUD(fprintf(stderr, "opening loop: open('%s', O_RDWR) [%d]\n", curfile->d_name, fd);)
/* If read-write open fails, privileged users can dedupe in read-only mode */
if (fd == -1) {
/* Preserve errno in case read-only fallback fails */
LOUD(fprintf(stderr, "opening loop: open('%s', O_RDWR) failed: %s\n", curfile->d_name, strerror(errno));)
errno2 = errno;
fd = open(curfile->d_name, O_RDONLY);
if (fd == -1) {
LOUD(fprintf(stderr, "opening loop: fallback open('%s', O_RDONLY) failed: %s\n", curfile->d_name, strerror(errno));)
fprintf(stderr, "Unable to open '%s': %s%s\n", curfile->d_name,
strerror(errno2), readonly_msg[readonly]);
continue;
}
LOUD(fprintf(stderr, "opening loop: fallback open('%s', O_RDONLY) succeeded\n", curfile->d_name);)
}
same->info[cur_info].fd = fd;
same->info[cur_info].logical_offset = 0;
cur_info++;
total_files++;
}
n_dupes = cur_info;
same->logical_offset = 0;
same->length = (unsigned long)files->size;
same->dest_count = (uint16_t)n_dupes; /* kernel type is __u16 */
fd = open(files->d_name, O_RDONLY);
LOUD(fprintf(stderr, "source: open('%s', O_RDONLY) [%d]\n", files->d_name, fd);)
if (fd == -1) {
fprintf(stderr, "unable to open(\"%s\", O_RDONLY): %s\n", files->d_name, strerror(errno));
goto cleanup;
}
/* Call dedupe ioctl to pass the files to the kernel */
ret = ioctl(fd, BTRFS_IOC_FILE_EXTENT_SAME, same);
LOUD(fprintf(stderr, "dedupe: ioctl('%s' [%d], BTRFS_IOC_FILE_EXTENT_SAME, same) => %d\n", files->d_name, fd, ret);)
if (close(fd) == -1) fprintf(stderr, "Unable to close(\"%s\"): %s\n", files->d_name, strerror(errno));
if (ret < 0) {
fprintf(stderr, "dedupe failed against file '%s' (%d matches): %s\n", files->d_name, n_dupes, strerror(errno));
goto cleanup;
}
for (cur_info = 0; cur_info < n_dupes; cur_info++) {
status = same->info[cur_info].status;
if (status != 0) {
if (same->info[cur_info].bytes_deduped == 0) {
fprintf(stderr, "warning: dedupe failed: %s => %s: %s [%d]%s\n",
files->d_name, dupe_filenames[cur_info], dedupeerrstr(status),
status, readonly_msg[readonly]);
} else {
fprintf(stderr, "warning: dedupe only did %jd bytes: %s => %s: %s [%d]%s\n",
(intmax_t)same->info[cur_info].bytes_deduped, files->d_name,
dupe_filenames[cur_info], dedupeerrstr(status), status, readonly_msg[readonly]);
}
}
}
cleanup:
for (cur_info = 0; cur_info < n_dupes; cur_info++) {
if (close((int)same->info[cur_info].fd) == -1) {
fprintf(stderr, "unable to close(\"%s\"): %s", dupe_filenames[cur_info],
strerror(errno));
}
}
} /* has dupes */
files = files->next;
}
if (!ISFLAG(flags, F_HIDEPROGRESS)) fprintf(stderr, "Deduplication done (%d files processed)\n", total_files);
free(same);
free(dupe_filenames);
return;
}
#endif /* ENABLE_BTRFS */
/* jdupes action for BTRFS block-level deduplication */
#ifndef ACT_DEDUPEFILES_H
#define ACT_DEDUPEFILES_H
#ifdef __cplusplus
extern "C" {
#endif
#include "jdupes.h"
extern void dedupefiles(file_t * restrict files);
#ifdef __cplusplus
}
#endif
#endif /* ACT_DEDUPEFILES_H */
/* Delete duplicate files automatically or interactively */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include "jdupes.h"
#include "jody_win_unicode.h"
#include "act_deletefiles.h"
extern void deletefiles(file_t *files, int prompt, FILE *tty)
{
unsigned int counter, groups;
unsigned int curgroup = 0;
file_t *tmpfile;
file_t **dupelist;
unsigned int *preserve;
char *preservestr;
char *token;
char *tstr;
unsigned int number, sum, max, x;
size_t i;
groups = get_max_dupes(files, &max, NULL);
max++;
dupelist = (file_t **) malloc(sizeof(file_t*) * max);
preserve = (unsigned int *) malloc(sizeof(int) * max);
preservestr = (char *) malloc(INPUT_SIZE);
if (!dupelist || !preserve || !preservestr) oom("deletefiles() structures");
for (; files; files = files->next) {
if (ISFLAG(files->flags, F_HAS_DUPES)) {
curgroup++;
counter = 1;
dupelist[counter] = files;
if (prompt) {
printf("[%u] ", counter); fwprint(stdout, files->d_name, 1);
}
tmpfile = files->duplicates;
while (tmpfile) {
dupelist[++counter] = tmpfile;
if (prompt) {
printf("[%u] ", counter); fwprint(stdout, tmpfile->d_name, 1);
}
tmpfile = tmpfile->duplicates;
}
if (prompt) printf("\n");
/* preserve only the first file */
if (!prompt) {
preserve[1] = 1;
for (x = 2; x <= counter; x++) preserve[x] = 0;
} else do {
/* prompt for files to preserve */
printf("Set %u of %u: keep which files? (1 - %u, [a]ll, [n]one)",
curgroup, groups, counter);
if (ISFLAG(flags, F_SHOWSIZE)) printf(" (%ju byte%c each)", (uintmax_t)files->size,
(files->size != 1) ? 's' : ' ');
printf(": ");
fflush(stdout);
/* treat fgets() failure as if nothing was entered */
if (!fgets(preservestr, INPUT_SIZE, tty)) preservestr[0] = '\n';
i = strlen(preservestr) - 1;
/* tail of buffer must be a newline */
while (preservestr[i] != '\n') {
tstr = (char *)realloc(preservestr, strlen(preservestr) + 1 + INPUT_SIZE);
if (!tstr) oom("deletefiles() prompt string");
preservestr = tstr;
if (!fgets(preservestr + i + 1, INPUT_SIZE, tty))
{
preservestr[0] = '\n'; /* treat fgets() failure as if nothing was entered */
break;
}
i = strlen(preservestr) - 1;
}
for (x = 1; x <= counter; x++) preserve[x] = 0;
token = strtok(preservestr, " ,\n");
if (token != NULL && (*token == 'n' || *token == 'N')) goto preserve_none;
while (token != NULL) {
if (*token == 'a' || *token == 'A')
for (x = 0; x <= counter; x++) preserve[x] = 1;
number = 0;
sscanf(token, "%u", &number);
if (number > 0 && number <= counter) preserve[number] = 1;
token = strtok(NULL, " ,\n");
}
for (sum = 0, x = 1; x <= counter; x++) sum += preserve[x];
} while (sum < 1); /* save at least one file */
preserve_none:
printf("\n");
for (x = 1; x <= counter; x++) {
if (preserve[x]) {
printf(" [+] "); fwprint(stdout, dupelist[x]->d_name, 1);
} else {
#ifdef UNICODE
if (!M2W(dupelist[x]->d_name, wstr)) {
printf(" [!] "); fwprint(stdout, dupelist[x]->d_name, 0);
printf("-- MultiByteToWideChar failed\n");
continue;
}
#endif
if (file_has_changed(dupelist[x])) {
printf(" [!] "); fwprint(stdout, dupelist[x]->d_name, 0);
printf("-- file changed since being scanned\n");
#ifdef UNICODE
} else if (DeleteFile(wstr) != 0) {
#else
} else if (remove(dupelist[x]->d_name) == 0) {
#endif
printf(" [-] "); fwprint(stdout, dupelist[x]->d_name, 1);
} else {
printf(" [!] "); fwprint(stdout, dupelist[x]->d_name, 0);
printf("-- unable to delete file\n");
}
}
}
printf("\n");
}
}
free(dupelist);
free(preserve);
free(preservestr);
return;
}
/* jdupes action for deleting duplicate files */
#ifndef ACT_DELETEFILES_H
#define ACT_DELETEFILES_H
#ifdef __cplusplus
extern "C" {
#endif
#include "jdupes.h"
extern void deletefiles(file_t *files, int prompt, FILE *tty);
#ifdef __cplusplus
}
#endif
#endif /* ACT_DELETEFILES_H */
/* jdupes action for hard and soft file linking */
#ifndef ACT_LINKFILES_H
#define ACT_LINKFILES_H
#ifdef __cplusplus
extern "C" {
#endif
#include "jdupes.h"
extern void linkfiles(file_t *files, const int hard);
#ifdef __cplusplus
}
#endif
#endif /* ACT_LINKFILES_H */
#include <stdio.h>
#include "jdupes.h"
#include "jody_win_unicode.h"
#include "act_printmatches.h"
extern void printmatches(file_t * restrict files)
{
file_t * restrict tmpfile;
while (files != NULL) {
if (ISFLAG(files->flags, F_HAS_DUPES)) {
if (!ISFLAG(flags, F_OMITFIRST)) {
if (ISFLAG(flags, F_SHOWSIZE)) printf("%jd byte%c each:\n", (intmax_t)files->size,
(files->size != 1) ? 's' : ' ');
fwprint(stdout, files->d_name, 1);
}
tmpfile = files->duplicates;
while (tmpfile != NULL) {
fwprint(stdout, tmpfile->d_name, 1);
tmpfile = tmpfile->duplicates;
}
if (files->next != NULL) printf("\n");
}
files = files->next;
}
return;
}
/* jdupes action for printing matched file sets to stdout */
#ifndef ACT_PRINTMATCHES_H
#define ACT_PRINTMATCHES_H
#ifdef __cplusplus
extern "C" {
#endif
#include "jdupes.h"
extern void printmatches(file_t * restrict files);
#ifdef __cplusplus
}
#endif
#endif /* ACT_PRINTMATCHES_H */
/* Print summary of match statistics to stdout */
#include <stdio.h>
#include "jdupes.h"
#include "act_summarize.h"
extern void summarizematches(const file_t * restrict files)
{
unsigned int numsets = 0;
off_t numbytes = 0;
int numfiles = 0;
while (files != NULL) {
file_t *tmpfile;
if (ISFLAG(files->flags, F_HAS_DUPES)) {
numsets++;
tmpfile = files->duplicates;
while (tmpfile != NULL) {
numfiles++;
numbytes += files->size;
tmpfile = tmpfile->duplicates;
}
}
files = files->next;
}
if (numsets == 0)
printf("No duplicates found.\n");
else
{
printf("%d duplicate files (in %d sets), occupying ", numfiles, numsets);
if (numbytes < 1000) printf("%jd byte%c\n", (intmax_t)numbytes, (numbytes != 1) ? 's' : ' ');
else if (numbytes <= 1000000) printf("%jd KB\n", (intmax_t)(numbytes / 1000));
else printf("%jd MB\n", (intmax_t)(numbytes / 1000000));
}
return;
}
/* jdupes action for printing a summary of match stats to stdout */
#ifndef ACT_SUMMARIZE_H
#define ACT_SUMMARIZE_H
#ifdef __cplusplus
extern "C" {
#endif
#include "jdupes.h"
extern void summarizematches(const file_t * restrict files);
#ifdef __cplusplus
}
#endif
#endif /* ACT_SUMMARIZE_H */
@@ -6,4 +6,6 @@ filesystem. However, this option uses the file linux/btrfs.h, not available
 for hurd-i386 and kfreebsd-*. So, jdupes has support to btrfs in all Linux
 based architectures only.
+
+To see if the jdupes build supports btrfs, use the 'jdupes -v' command.
- -- Joao Eriberto Mota Filho <eriberto@debian.org>  Mon, 02 Jan 2016 23:38:04 -0200
+ -- Joao Eriberto Mota Filho <eriberto@debian.org>  Mon, 03 Jan 2016 18:01:04 -0200
jdupes (1.7-1) unstable; urgency=medium
* New upstream release.
* Upload to unstable. See the previous changelog for details.
(Closes: #848360)
* debian/patches/10_use-program-name-variable.patch: dropped. The upstream
fixed the source code. Thanks!
  * debian/README.Debian: added new information.
-- Joao Eriberto Mota Filho <eriberto@debian.org> Tue, 03 Jan 2017 17:30:04 -0200
jdupes (1.6.2-4) experimental; urgency=medium

  * Using a single package again, as suggested by Christoph Anton Mitterer
Description: use PROGRAM_NAME variable instead of jdupes.
Author: Joao Eriberto Mota Filho <eriberto@debian.org>
Last-Update: 2016-12-14
Index: jdupes-1.6.2/Makefile
===================================================================
--- jdupes-1.6.2.orig/Makefile
+++ jdupes-1.6.2/Makefile
@@ -103,15 +103,15 @@ OBJECT_FILES += jdupes.o jody_hash.o jod
all: jdupes
jdupes: $(OBJECT_FILES)
- $(CC) $(CFLAGS) $(LDFLAGS) -o jdupes $(OBJECT_FILES)
+ $(CC) $(CFLAGS) $(LDFLAGS) -o $(PROGRAM_NAME) $(OBJECT_FILES)
installdirs:
test -d $(DESTDIR)$(BIN_DIR) || $(MKDIR) $(DESTDIR)$(BIN_DIR)
test -d $(DESTDIR)$(MAN_DIR) || $(MKDIR) $(DESTDIR)$(MAN_DIR)
install: jdupes installdirs
- $(INSTALL_PROGRAM) jdupes $(DESTDIR)$(BIN_DIR)/$(PROGRAM_NAME)
- $(INSTALL_DATA) jdupes.1 $(DESTDIR)$(MAN_DIR)/$(PROGRAM_NAME).$(MAN_EXT)
+ $(INSTALL_PROGRAM) $(PROGRAM_NAME) $(DESTDIR)$(BIN_DIR)/$(PROGRAM_NAME)
+ $(INSTALL_DATA) $(PROGRAM_NAME).1 $(DESTDIR)$(MAN_DIR)/$(PROGRAM_NAME).$(MAN_EXT)
clean:
- $(RM) $(OBJECT_FILES) jdupes jdupes.exe *~ *.gcno *.gcda *.gcov
+ $(RM) $(OBJECT_FILES) $(PROGRAM_NAME) jdupes.exe *~ *.gcno *.gcda *.gcov
10_use-program-name-variable.patch
@@ -64,7 +64,8 @@ when used together with \-\-delete, preserve the first file in each set of
 duplicates and delete the others without prompting the user
 .TP
 .B -n --noempty
-exclude zero-length files from consideration
+exclude zero-length files from consideration; this option is the default
+behavior and does nothing (also see \fB\-z/--zeromatch\fP)
 .TP
 .B -O --paramorder
 parameter order preservation is more important than the chosen sort; this
@@ -95,6 +96,10 @@ Examples section below for further explanation)
 .B -r --recurse
 for every directory given follow subdirectories encountered within
 .TP
+.B -s --linksoft
+replace all duplicate files with symlinks to the first file in each set
+of duplicates
+.TP
 .B -S --size
 show size of duplicate files
 .TP
@@ -119,6 +124,10 @@ for megabytes (units of 1024 x 1024 bytes)
 for gigabytes (units of 1024 x 1024 x 1024 bytes)