Cope with duplicate files creating archives, fixes #33
brunchboy committed Jun 18, 2024
1 parent a34f75d commit 8bd31a4
Showing 6 changed files with 277 additions and 290 deletions.
5 changes: 2 additions & 3 deletions doc/modules/ROOT/pages/anlz.adoc
@@ -558,7 +558,7 @@ color preview data begins at byte{nbsp}``18`` and is 7,200 (decimal)
bytes long, representing 1,200 columns of waveform preview
information.

The color waveform preview entries are the most complex of any of the
The color waveform preview entries are the most complex of the
waveform tags. See the
<<djl-analysis:ROOT:track_metadata.adoc#color-preview-analysis,protocol
analysis document>> for the details.
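
(Illustrative aside, not part of this commit: a minimal sketch of how the sizes quoted above fit together, assuming the 7,200-byte preview payload described in the documentation has already been extracted from the tag. The method and variable names are hypothetical; the six-byte stride is simply 7,200 ÷ 1,200.)

----
// Sketch only: split the 7,200-byte color preview payload into its
// 1,200 per-column entries; 7,200 / 1,200 = 6 bytes per column.
static byte[][] splitColorPreview(byte[] payload) {      // payload is the 7,200-byte preview data
    final int bytesPerColumn = payload.length / 1200;    // = 6
    byte[][] columns = new byte[1200][bytesPerColumn];
    for (int i = 0; i < 1200; i++) {
        System.arraycopy(payload, i * bytesPerColumn, columns[i], 0, bytesPerColumn);
    }
    return columns;
}
----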
@@ -659,8 +659,7 @@ include::example$tag_shared.edn[]
----

__len_entry_bytes__ identifies how many bytes each phrase entry takes
up; so far it always has the value `18`, so each entry takes twenty
four bytes. __len_entries__ at bytes{nbsp}``10``-`11` (labeled
up; so far it always has the value `18`, so each entry takes twenty-four bytes. __len_entries__ at bytes{nbsp}``10``-`11` (labeled
_len~e~_ in the diagram) specifies how many entries are present in the
tag. Each entry represents one recognized phrase.
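
(Illustrative aside, not part of this commit: a sketch of how the two header fields just described would be used to locate an individual phrase entry. The offset at which the entries begin is not quoted in this excerpt, so it is passed in as an assumed parameter, and all names are hypothetical.)

----
// Sketch only: compute where phrase entry i starts, given the fields described above.
// lenEntryBytes has so far always been 0x18 (24 bytes per entry), and lenEntries
// (bytes 10-11 of the tag) says how many entries are present.
static long phraseEntryOffset(long entriesStart, int lenEntryBytes, int lenEntries, int i) {
    if (i < 0 || i >= lenEntries) {
        throw new IndexOutOfBoundsException("entry " + i + " of " + lenEntries);
    }
    return entriesStart + (long) i * lenEntryBytes;
}
----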

8 changes: 4 additions & 4 deletions doc/modules/ROOT/pages/exports.adoc
@@ -38,7 +38,7 @@ be mounted in DJ controllers and used to play and mix music.
The file consists of a series of fixed size pages. The first page
contains a file header which defines the page size and the locations
of database tables of different types, by the index of their first
page. The rest of the pages consist of the data pages for all of the
page. The rest of the pages consist of the data pages for all the
tables identified in the header.

Each table is made up of a series of rows which may be spread across
@@ -63,7 +63,7 @@ Lesniak], to whom I am hugely grateful.
[[file-header]]
=== File Header

Unless otherwise stated, all multi-byte numbers in the file are stored
Unless otherwise stated, all multibyte numbers in the file are stored
in little-endian byte order. Field names used in the byte field
diagrams match the IDs assigned to them in the
https://github.com/Deep-Symmetry/crate-digger/blob/master/src/main/kaitai/rekordbox_pdb.ksy[Kaitai
@@ -304,7 +304,7 @@ _row~pf0~_ in the diagram (meaning “row presence flags group 0”), is
found near the end of the page. The last two bytes after each row
bitmask (for example _pad~0~_ after _row~pf0~_) have an unknown
purpose and may always be zero, and the _row~pf0~_ bitmask takes up
the two bytes that precede them. The low order bit of this value will
the two bytes that precede them. The low-order bit of this value will
be set if row 0 is really present, the next bit if row 1 is really
present, and so on. The two bytes before these flags, labeled
_ofs~0~_, store the offset of the first row in the page. This offset
@@ -315,7 +315,7 @@ the heap, at byte `28` in the page, _ofs~0~_ would have the value

As more rows are added to the page, space is allocated for them in the
heap, and additional index entries are added at the end of the heap,
growing backwards. Once there have been sixteen rows added, all of the
growing backwards. Once there have been sixteen rows added, all the
bits in _row~pf0~_ are accounted for, and when another row is added,
before its offset entry _ofs~16~_ can be added, another row bit-mask
entry _row~pf1~_ needs to be allocated, followed by its corresponding
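
(Illustrative aside, not part of this commit: a minimal sketch pulling together the layout details above, under two assumptions drawn from the text — pages are fixed-size and contiguous, so a page index from the file header becomes a byte offset by simple multiplication, and the two-byte row presence bitmask is read little-endian like every other multibyte value in the file. The class and method names are hypothetical.)

----
import java.nio.ByteBuffer;
import java.nio.ByteOrder;

// Sketch only: locate a page by index, and test whether a row slot within a
// row-presence-flags group is really present (the low-order bit covers row 0).
class PdbLayoutSketch {

    static long pageOffset(long pageIndex, long pageSize) {
        return pageIndex * pageSize;  // assumes contiguous fixed-size pages from the start of the file
    }

    static boolean rowPresent(ByteBuffer page, int bitmaskOffset, int rowWithinGroup) {
        int flags = page.order(ByteOrder.LITTLE_ENDIAN).getShort(bitmaskOffset) & 0xFFFF;
        return ((flags >> rowWithinGroup) & 1) != 0;
    }
}
----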
7 changes: 7 additions & 0 deletions pom.xml
@@ -65,6 +65,13 @@
<version>1.1.4</version>
</dependency>

<!-- Annotations to document API entry points -->
<dependency>
<groupId>org.apiguardian</groupId>
<artifactId>apiguardian-api</artifactId>
<version>1.1.2</version>
</dependency>

<!-- Simple Logging Facade for Java -->
<dependency>
<groupId>org.slf4j</groupId>
63 changes: 37 additions & 26 deletions src/main/java/org/deepsymmetry/cratedigger/Archivist.java
@@ -1,24 +1,27 @@
package org.deepsymmetry.cratedigger;

import org.apiguardian.api.API;
import org.deepsymmetry.cratedigger.pdb.RekordboxPdb;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

import java.io.File;
import java.io.IOException;
import java.net.URI;
import java.net.URISyntaxException;
import java.nio.file.FileSystem;
import java.nio.file.FileSystems;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.*;
import java.util.Iterator;
import java.util.Map;

/**
* Supports the creation of archives of all the metadata needed from rekordbox media exports to enable full Beat Link
* features when working with the Opus Quad, which is unable to serve the metadata itself.
*/
@API(status = API.Status.EXPERIMENTAL)
public class Archivist {

private static final Logger logger = LoggerFactory.getLogger(Archivist.class);

/**
* Holds the singleton instance of this class.
*/
@@ -29,6 +32,7 @@ public class Archivist {
*
* @return the only instance that exists
*/
@API(status = API.Status.EXPERIMENTAL)
public static Archivist getInstance() {
return instance;
}
@@ -44,6 +48,7 @@ private Archivist() {
* An interface that can be used to display progress to the user as an archive is being created, and allow
* them to cancel the process if desired.
*/
@API(status = API.Status.EXPERIMENTAL)
public interface ArchiveListener {

/**
@@ -68,6 +73,7 @@ public interface ArchiveListener {
*
* @throws IOException if there is a problem creating the archive
*/
@API(status = API.Status.EXPERIMENTAL)
public void createArchive(Database database, File file) throws IOException {
createArchive(database, file, null);
}
@@ -84,6 +90,7 @@ public void createArchive(Database database, File file) throws IOException {
*
* @throws IOException if there is a problem creating the archive
*/
@API(status = API.Status.EXPERIMENTAL)
public void createArchive(Database database, File archiveFile, ArchiveListener listener) throws IOException {
final Path archivePath = archiveFile.toPath();
final Path mediaPath = database.sourceFile.getParentFile().getParentFile().getParentFile().toPath();
@@ -107,38 +114,21 @@ public void createArchive(Database database, File archiveFile, ArchiveListener l

// First the original analysis file.
final String anlzPathString = Database.getText(track.analyzePath());
final Path anlzPath = mediaPath.resolve(anlzPathString.substring(1));
Path destPath = fileSystem.getPath(anlzPathString);
Files.createDirectories(destPath.getParent());
Files.copy(anlzPath, destPath);
archiveMediaItem(mediaPath, anlzPathString, fileSystem, "analysis file");

// Then the extended analysis file, if it exists.
final String extPathString = anlzPathString.substring(0, anlzPathString.length() - 3) + "EXT";
final Path extPath = mediaPath.resolve(extPathString.substring(1));
if (extPath.toFile().canRead()) {
destPath = fileSystem.getPath(extPathString);
Files.copy(extPath, destPath);
}
archiveMediaItem(mediaPath, extPathString, fileSystem, "extended analysis file");

// Finally, the album art.
final RekordboxPdb.ArtworkRow artwork = database.artworkIndex.get(track.artworkId());
if (artwork != null) {
final String artPathString = Database.getText(artwork.path());
final Path artPath = mediaPath.resolve(artPathString.substring(1));
// First copy the regular resolution album art
if (artPath.toFile().canRead()) {
destPath = fileSystem.getPath(artPathString);
Files.createDirectories(destPath.getParent());
Files.copy(artPath, destPath);
}
archiveMediaItem(mediaPath, artPathString, fileSystem, "artwork file");

// Then, copy the high resolution album art, if it exists
final String highResArtPathString = artPathString.replaceFirst("(\\.\\w+$)", "_m$1");
final Path highResArtPath = mediaPath.resolve(highResArtPathString.substring(1));
if (highResArtPath.toFile().canRead()) {
destPath = fileSystem.getPath(highResArtPathString);
Files.createDirectories(destPath.getParent());
Files.copy(highResArtPath, destPath);
}
archiveMediaItem(mediaPath, highResArtPathString, fileSystem, "high-resolution artwork file");
}

++completed; // For use in providing progress feedback if there is a listener.
@@ -160,4 +150,25 @@ public void createArchive(Database database, File archiveFile, ArchiveListener l
}
}
}

/**
* Helper method to archive a single media export file when creating a metadata archive.
*
* @param mediaPath the path to the root of the media export from which the file is being copied
* @param pathString the string which holds the absolute path to the media item
* @param archive the ZIP filesystem in which the metadata archive is being created
* @param description the text identifying the type of file being archived, in case we need to log a warning
*
* @throws IOException if there is an unexpected problem adding the media item to the archive
*/
private static void archiveMediaItem(Path mediaPath, String pathString, FileSystem archive, String description) throws IOException {
final Path sourcePath = mediaPath.resolve(pathString.substring(1));
final Path destinationPath = archive.getPath(pathString);
Files.createDirectories(destinationPath.getParent());
try {
Files.copy(sourcePath, destinationPath);
} catch (FileAlreadyExistsException e) {
logger.warn("Skipping copy of {} {} because it has already been archived.", description, destinationPath);
}
}
}
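
(Illustrative aside, not part of this commit: with the refactoring above, duplicate tracks that share the same analysis or artwork files no longer abort archive creation, because the new archiveMediaItem helper logs a warning and skips anything that is already present in the archive. A caller might look like the sketch below; the paths and archive name are hypothetical, and the Database constructor and close() call are assumed from the rest of the crate-digger API rather than shown in this diff.)

----
import org.deepsymmetry.cratedigger.Archivist;
import org.deepsymmetry.cratedigger.Database;

import java.io.File;

// Sketch only: build a metadata archive from a mounted rekordbox export so that
// Beat Link can use it when working with the Opus Quad. Paths are hypothetical.
public class ArchiveExample {

    public static void main(String[] args) throws Exception {
        Database database = new Database(new File("/Volumes/USB/PIONEER/rekordbox/export.pdb"));
        try {
            Archivist.getInstance().createArchive(database, new File("usb-metadata.zip"));
        } finally {
            database.close();  // assumed: Database keeps the export file open while in use
        }
    }
}
----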
(The remaining two changed files were not loaded in this view.)
