
Add: series migration to be unique #3417

Merged · 15 commits into advplyr:master · Oct 12, 2024

Conversation

@nichwall (Contributor)

This PR resolves #3207 and is a continuation of #3326.

A migration script is added for 2.13.5, so it will work for both a minor and a patch release (not sure what the plans are for the next release). I added some unit tests for this migration, but also tested it manually before writing the unit tests. I also updated the Series model to have a unique index on series name and libraryId, because series can have the same name in different libraries.

I am not sure if I did the QueryInterface parts of the migration script correctly, so if y'all can check that it would be great. Basically, I am using raw SQL instead of Sequelize's query builder, so I'm not sure this is the correct way to do things.
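
For context, a migration in this style is a module exporting up/down functions. The sketch below is a minimal illustration of the raw-SQL approach being discussed, assuming the migration runner hands each script a Sequelize queryInterface in its context; the query, index name, and overall shape are illustrative, not the PR's actual code.

// Minimal sketch of a raw-SQL migration (illustrative only)
async function up({ context: queryInterface }) {
  // Raw SQL goes through the underlying sequelize instance
  const [duplicates] = await queryInterface.sequelize.query(`
    SELECT name, libraryId, COUNT(*) AS count
    FROM Series
    GROUP BY name, libraryId
    HAVING COUNT(*) > 1
  `)
  // ...merge each group in `duplicates` here, then enforce uniqueness going forward:
  await queryInterface.addIndex('Series', ['name', 'libraryId'], {
    unique: true,
    name: 'unique_series_name_per_library' // made-up index name
  })
}

async function down({ context: queryInterface }) {
  await queryInterface.removeIndex('Series', 'unique_series_name_per_library')
}

module.exports = { up, down }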

@nichwall (Contributor Author)

Well, looks like I missed something. I need to step away for a bit but can try reviewing the error later tonight.

@nichwall (Contributor Author)

Oh, right. It may be failing because it depends on #3416? The copy to /config/migrations is failing.

@mikiher (Contributor) commented Sep 14, 2024

This might be a separate issue. I see the integration test is failing as well due to some pkg runtime failure. I'll investigate.

@mikiher (Contributor) commented Sep 14, 2024

OK, try patching the change I made in 21c77dc. This will fix the integration test failure.

Comment on lines 58 to 77
// Update all BookSeries records for this series to point to the most recent series
const [seriesUpdated] = await queryInterface.sequelize.query(
  `
  UPDATE BookSeries
  SET seriesId = :mostRecentSeriesId
  WHERE seriesId IN (
    SELECT id
    FROM Series
    WHERE name = :name AND libraryId = :libraryId
      AND id != :mostRecentSeriesId
  )
  `,
  {
    replacements: {
      name: duplicate.name,
      libraryId: duplicate.libraryId,
      mostRecentSeriesId: mostRecentSeries.id
    }
  }
)
@mikiher (Contributor)

I believe there's an issue here that the test doesn't uncover because the bookSeries table you create there is not set up correctly.

If you examine an actual ABS database, you will see that the bookSeries table has a unique constraint on [bookId, seriesId].

This means that if some book with id book1 points to two series duplicates with ids seriesDup1 and seriesDup2, then before this query there are two bookSeries records [book1, seriesDup1] and [book1, seriesDup2], but after this query there will be [book1, seriesDup2] and [book1, seriesDup2], which violates the unique constraint and will throw an exception.

I don't know if such a situation occurs in user databases, but we have to assume it does.

Your test won't uncover this (even if it had the right test case) since it creates the bookSeries table without the unique constraint.
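
The failure mode can be reproduced directly in SQLite; a minimal sketch with made-up ids (not taken from a real database):

CREATE TABLE bookSeries (id UUID PRIMARY KEY, sequence VARCHAR(255), bookId UUID, seriesId UUID, UNIQUE (bookId, seriesId));
INSERT INTO bookSeries VALUES ('bs1', '1', 'book1', 'seriesDup1');
INSERT INTO bookSeries VALUES ('bs2', '2', 'book1', 'seriesDup2');
-- Re-pointing the first row at the surviving series collides with the second row:
UPDATE bookSeries SET seriesId = 'seriesDup2' WHERE seriesId = 'seriesDup1';
-- => Error: UNIQUE constraint failed: bookSeries.bookId, bookSeries.seriesId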

@advplyr (Owner)

I'm not seeing the unique constraint on [bookId, seriesId]

CREATE TABLE bookSeries (
    id        UUID,
    sequence  VARCHAR (255),
    bookId    UUID,
    seriesId  UUID,
    createdAt DATETIME,
    CONSTRAINT BOOKSERIES_PK PRIMARY KEY (
        id
    ),
    CONSTRAINT FK_bookSeries_books FOREIGN KEY (
        bookId
    )
    REFERENCES books (id) ON DELETE CASCADE
                          ON UPDATE CASCADE,
    CONSTRAINT FK_bookSeries_series_2 FOREIGN KEY (
        seriesId
    )
    REFERENCES series (id) ON DELETE CASCADE
                           ON UPDATE CASCADE
);

@nichwall (Contributor Author)

Yeah, that's what I was basing my tables off of. If you look at the actual SQLite database, you can see the table is created as below (note the unique constraint at the very end of the line):

CREATE TABLE `bookSeries` (`id` UUID PRIMARY KEY, `sequence` VARCHAR(255), `bookId` UUID REFERENCES `books` (`id`) ON DELETE CASCADE ON UPDATE CASCADE, `seriesId` UUID REFERENCES `series` (`id`) ON DELETE CASCADE ON UPDATE CASCADE, `createdAt` DATETIME, UNIQUE (`bookId`, `seriesId`))

@mikiher (Contributor)

@advplyr where is the statement above coming from?

@nichwall (Contributor Author)

Probably here?

static init(sequelize) {
  super.init(
    {
      id: {
        type: DataTypes.UUID,
        defaultValue: DataTypes.UUIDV4,
        primaryKey: true
      },
      sequence: DataTypes.STRING
    },
    {
      sequelize,
      modelName: 'bookSeries',
      timestamps: true,
      updatedAt: false
    }
  )
  // Super Many-to-Many
  // ref: https://sequelize.org/docs/v6/advanced-association-concepts/advanced-many-to-many/#the-best-of-both-worlds-the-super-many-to-many-relationship
  const { book, series } = sequelize.models
  book.belongsToMany(series, { through: BookSeries })
  series.belongsToMany(book, { through: BookSeries })
  book.hasMany(BookSeries, {
    onDelete: 'CASCADE'
  })
  BookSeries.belongsTo(book)
  series.hasMany(BookSeries, {
    onDelete: 'CASCADE'
  })
  BookSeries.belongsTo(series)
}

@mikiher (Contributor)

Ah, that in itself indicates that databases started at different points in time are not consistent (sigh)...
This requires another migration...

@nichwall (Contributor Author) commented Sep 18, 2024

Does the migration manager support multiple migration scripts for the same release? Or, do I need to change something else in this migration script? I'm a little confused.

@mikiher (Contributor)

No, multiple migration scripts in the same version aren't supported.

To be clear, I am not requiring any additional changes in your migration. I was just saying that the fact that the existing codebase is running against databases with different schemas is worrying, but I wasn't suggesting that we do anything immediate about it.

@advplyr (Owner)

I'm not sure how that could have happened, since that relationship has been set up like that since the first release of that table. It could be that I'm the only one with that difference and it happened during the initial development.

@mikiher (Contributor)

I see, thanks!

@mikiher (Contributor) commented Sep 14, 2024

A couple of ideas regarding setting up the unit test tables correctly:

  1. If you use SQLiteStudio to inspect an existing ABS database, you can look at the DDL tab of a table to see how it was created.
  2. Another option would be to start with an initial new database created by ABS (without any libraries) before the migration, and work off of that. You can put that database in a testdata subdirectory of your unit test, and then before each test copy it to a temp directory, load it, and manipulate it to your liking. An empty ABS database is only about 300kb in size.

@nichwall (Contributor Author) commented Sep 14, 2024

Thanks, I'll update my tests to include the rest of the indexes/constraints for the two tables. Is how I'm doing the beforeEach and afterEach a good way to do the initial setup?

I like the idea of working off of the database created by the server, but I think that will not work as well for migrations. For this migration, I also updated the model in server/models/Series to have the same unique index in the table, so the migration tests would need to be created from an old server version. (Edited paragraph, hit enter without finishing)
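
For reference, a composite unique index in a Sequelize model definition looks roughly like the sketch below. This is illustrative, not the actual server/models/Series code, and it assumes the libraryId column comes from the library association.

const { DataTypes, Model } = require('sequelize')

class Series extends Model {
  static init(sequelize) {
    super.init(
      {
        id: {
          type: DataTypes.UUID,
          defaultValue: DataTypes.UUIDV4,
          primaryKey: true
        },
        name: DataTypes.STRING
      },
      {
        sequelize,
        modelName: 'series',
        indexes: [
          // one series name per library
          { unique: true, fields: ['name', 'libraryId'] }
        ]
      }
    )
  }
}

module.exports = Series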

I would love a test database created by the server that is copied and then used by tests as a way to automate API tests or other e2e tests, though. I had tried playing with that shortly after you added the test framework, but never figured it out.

@nichwall (Contributor Author)

Oh, I see what you mean, you're saying have several test databases made beforehand (possibly with some test data)?

@advplyr (Owner) commented Sep 14, 2024

2. Another option would be to start with an initial new database created by ABS (without any libraries) before the migration, and work off of that. You can put that database in a testdata subdirectory of your unit test, and then before each test copy it to a temp directory, load it, and manipulate it to your liking. An empty ABS database is only about 300kb in size.

This seems like a good idea, but I'm not as familiar with building out unit tests like this. That size is negligible.

@nichwall (Contributor Author)

In response to mikiher's comments about missing the unique constraint for bookId and seriesId, how do we want to handle combining the book into the same series (with different sequence values)?

Should I sort the sequence column and concatenate them using ", "? I think that is better than just deleting the extra data because the sequence can still be a string, but this may cause migration problems later if that column is ever changed to be numeric. Though, that will be a can of worms with people using custom strings there anyway.

@mikiher (Contributor) commented Sep 14, 2024

Oh, I see what you mean, you're saying have several test databases made beforehand (possibly with some test data)?

No, not necessarily several databases, just one for each migration script you write, that is either empty or contains a small amount of pre-migration dummy data (added using ABS). Once you have such a database, you:

  • Put it in the test directory, e.g. test/server/migration/testdata/absdatabase_v2.13.5-series-column-unique.sqlite
  • Then, before each test:
    • Copy the database file to a temp location, e.g. /tmp/tmp.sqlite
    • Instantiate a Sequelize instance with the tmp database
    • Manipulate it as you like
  • Inside each test
    • Manipulate it some more (if needed)
    • Run the migration
    • Assert what needs to be asserted
  • After each test
    • Delete the tmp database
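
A minimal mocha sketch of that flow, with illustrative paths and filenames (the snapshot name and test structure are assumptions, not the PR's actual test code):

const fs = require('fs')
const os = require('os')
const path = require('path')
const { Sequelize } = require('sequelize')

describe('v2.13.5 series migration', () => {
  let sequelize
  let tmpDbPath

  beforeEach(() => {
    // Copy the pre-migration snapshot to a temp location so each test starts clean
    tmpDbPath = path.join(os.tmpdir(), 'tmp.sqlite')
    const snapshot = path.join(__dirname, 'testdata', 'absdatabase_v2.13.5-series-column-unique.sqlite')
    fs.copyFileSync(snapshot, tmpDbPath)
    sequelize = new Sequelize({ dialect: 'sqlite', storage: tmpDbPath, logging: false })
  })

  afterEach(async () => {
    await sequelize.close()
    fs.rmSync(tmpDbPath, { force: true })
  })

  it('deduplicates series', async () => {
    // Manipulate the data, run the migration's up(), then assert on the result
  })
})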

These tests will (maybe) run a little slower than standard tests because of all the file system operations, but I don't think this will be a huge overhead.

If you really care about test performance, we can potentially optimize this: instead of using a file-based temp database, you can:

  • Before each test:
    • Instantiate an in-memory Sequelize instance
    • Use ATTACH DATABASE to attach testdata/absdatabase_v2.13.5-series-column-unique.sqlite to the connection in read-only mode
    • Read all the tables and their data from the attached database into the main (in-memory) database
    • Detach the file database
  • Inside each test
    • As before, but now working with the in-memory instance
  • After each test
    • Close and discard the in-memory instance.

@mikiher (Contributor) commented Sep 14, 2024

In response to mikiher's comments about missing the unique constraint for bookId and seriesId, how do we want to handle combining the book into the same series (with different sequence values)?

Should I sort the sequence column and concatenate them using ", "? I think that is better than just deleting the extra data because the sequence can still be a string, but this may cause migration problems later if that column is ever changed to be numeric. Though, that will be a can of worms with people using custom strings there anyway.

IMO you should not concatenate - the sequence column in bookSeries is not meant to have multiple values. You should have a (simple) algorithm that decides which one to keep, just like with the series table.
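
One possible shape for such a rule, as a SQL sketch: for each book, among its bookSeries rows pointing at duplicates of the same series (same name and libraryId), keep only the row that sorts first by sequence. This is illustrative, not the PR's code; NULL sequences and string-vs-numeric ordering would need extra care.

DELETE FROM bookSeries
WHERE id IN (
  SELECT bs.id
  FROM bookSeries bs
  JOIN series s ON s.id = bs.seriesId
  JOIN bookSeries bs2 ON bs2.bookId = bs.bookId AND bs2.id != bs.id
  JOIN series s2 ON s2.id = bs2.seriesId
  WHERE s2.name = s.name
    AND s2.libraryId = s.libraryId
    AND (bs2.sequence < bs.sequence
         OR (bs2.sequence = bs.sequence AND bs2.id < bs.id))
);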

@mikiher (Contributor) commented Sep 14, 2024

Regarding testing with a snapshot of an ABS database, there's an even easier way: you can dump a database as a list of SQL statements. For example, I just dumped an empty ABS database to export.sql using SQLiteStudio (or you can do it with the sqlite3 CLI). It contains all the SQL queries needed to bring the database into its current state (including table creation, value insertion, index creation, etc.).

The dump weighs just about 13kb.

You can put it in testdata, load it as a string, and parse it into an array of statements, sqlStatements.

  • Before each test:
    • Instantiate a new in-memory sequelize instance
    • Run for (const statement of sqlStatements) await sequelize.query(statement)

And voila - you have an in-memory database identical to the original.

Again, if the database is large, this can be expensive, but for small empty/near-empty databases, this will likely take very little time. And of course the added value is that it's very simple to implement.
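
Sketched out (assuming the dump contains no semicolons inside string literals, so a naive split is safe for an empty database):

const fs = require('fs')
const path = require('path')
const { Sequelize } = require('sequelize')

let sequelize

beforeEach(async () => {
  // Parse the dump into individual statements (naive split on ';')
  const sqlStatements = fs
    .readFileSync(path.join(__dirname, 'testdata', 'export.sql'), 'utf8')
    .split(';')
    .map((s) => s.trim())
    .filter(Boolean)

  // Fresh in-memory database per test, rebuilt from the dump
  sequelize = new Sequelize('sqlite::memory:', { logging: false })
  for (const statement of sqlStatements) {
    await sequelize.query(statement)
  }
})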

@mikiher (Contributor) commented Sep 14, 2024

I like the idea of working off of the database created by the server, but I think that will not work as well for migrations. For this migration, I also updated the model in server/models/Series to have the same unique index in the table, so the migration tests would need to be created from an old server version. (Edited paragraph, hit enter without finishing)

Of course the test database would need to be copied/created from a pre-migration database (that's the whole point of running the tests, isn't it?). I'm not sure I see why that would be an issue, though.

@nichwall (Contributor Author)

Of course the test database would need to be copied/created from a pre-migration database (that's the whole point of running the tests, isn't it?). I'm not sure I see why that would be an issue, though.

Yes, thanks for clarifying. I realized what you meant after I sent that message.

@nichwall (Contributor Author)

I have added 3 new test cases to address your initial comments about the same book being able to exist in multiple series. Before removing duplicate series in the bookSeries table, I am first checking for and removing any rows that share the series name and library item ID, sorting by the sequence number so the "first book in the series" is kept.

Should I change how the database is initialized in the tests to the in-memory approach built from an old ABS database before this PR is merged?

Thanks for your patience while I figure this out.

@mikiher (Contributor) commented Sep 15, 2024

@nichwall I'm still looking at it. I didn't have much time today, will continue tomorrow.

@mikiher (Contributor) left a comment

I think an SQL programmer would not have written it like this (they would use a fixed number of bulk update/delete queries with inner joins instead of all these loops), but I don't see any additional logic issues.

Other than asking that you test it on a few actual large databases (including ones that have duplicate series), and fixing the little nitpicks above, I approve.

server/migrations/v2.13.5-series-column-unique.js (outdated review thread, resolved)
Comment on lines 58 to 77 (same code excerpt quoted above)
@mikiher (Contributor)

This is very odd. This is what I get in my DDL tab for bookSeries:

CREATE TABLE bookSeries (
    id        UUID          PRIMARY KEY,
    sequence  VARCHAR (255),
    createdAt DATETIME      NOT NULL,
    bookId    UUID          REFERENCES books (id) ON DELETE CASCADE
                                                  ON UPDATE CASCADE,
    seriesId  UUID          REFERENCES series (id) ON DELETE CASCADE
                                                   ON UPDATE CASCADE,
    UNIQUE (
        bookId,
        seriesId
    )
);

Perhaps your database is very old, from before the implicit constraints were introduced? The SQL code looks quite different. In any case, as I mentioned above, the documentation clearly says that the table is implicitly created with the composite unique constraint because of the many-to-many relation between books and series (bookAuthors is similar in that regard).

server/migrations/v2.13.5-series-column-unique.js (outdated review thread, resolved)
server/migrations/v2.13.5-series-column-unique.js (outdated review thread, resolved)
@nichwall (Contributor Author)

Thanks for taking the time to review. I'll try and get those comments taken care of this evening. And you're correct, I am not a SQL person, but I figure an inefficient loop will be good enough for a one-time migration.

@advplyr (Owner) commented Sep 24, 2024

This is working great in my testing.

I came across one unrelated issue when testing a failing migration. I had the db file open in DBeaver, so the file was locked when the logic to restore the backup was running. That was throwing an exception inside the catch block when trying to move the file. I'm not sure if anything can be done there, but I wanted to point that out in case either of you had thoughts.

Thanks for all the effort put into this first migration. I'm not sure yet if this should be merged before v2.14.0 is ready. Sequelize will create the index automatically without a migration, and the migration is idempotent, so it won't break anything if it is merged early.

@nichwall (Contributor Author)

In response to @mikiher's comment about database migrations from previous versions, I checked out every release from server version 2.3.0 through 2.13.4 (latest at the time of writing), started the server, dumped the database to an SQL file, and then moved on to the next server version. I then ran a diff on all of the newly created databases to see when changes were implemented. Note that this does not include migrations from old databases; this is just how ABS created the database in each of these server versions.

There were 7 files with differences, summarized below.

It took a bit because work and life got really busy this past week.

Source file schema_v2.3.3.sql to schema_v2.3.4.sql had differences (this is the migration which stopped loading everything from the database into memory):
4,5c4,7
< CREATE TABLE `books` (`id` UUID PRIMARY KEY, `title` VARCHAR(255), `subtitle` VARCHAR(255), `publishedYear` VARCHAR(255), `publishedDate` VARCHAR(255), `publisher` VARCHAR(255), `description` TEXT, `isbn` VARCHAR(255), `asin` VARCHAR(255), `language` VARCHAR(255), `explicit` TINYINT(1), `abridged` TINYINT(1), `coverPath` VARCHAR(255), `narrators` JSON, `audioFiles` JSON, `ebookFile` JSON, `chapters` JSON, `tags` JSON, `genres` JSON, `createdAt` DATETIME NOT NULL, `updatedAt` DATETIME NOT NULL);
< CREATE TABLE `podcasts` (`id` UUID PRIMARY KEY, `title` VARCHAR(255), `author` VARCHAR(255), `releaseDate` VARCHAR(255), `feedURL` VARCHAR(255), `imageURL` VARCHAR(255), `description` TEXT, `itunesPageURL` VARCHAR(255), `itunesId` VARCHAR(255), `itunesArtistId` VARCHAR(255), `language` VARCHAR(255), `podcastType` VARCHAR(255), `explicit` TINYINT(1), `autoDownloadEpisodes` TINYINT(1), `autoDownloadSchedule` VARCHAR(255), `lastEpisodeCheck` DATETIME, `maxEpisodesToKeep` INTEGER, `maxNewEpisodesToDownload` INTEGER, `coverPath` VARCHAR(255), `tags` JSON, `genres` JSON, `createdAt` DATETIME NOT NULL, `updatedAt` DATETIME NOT NULL);
---
> CREATE TABLE `books` (`id` UUID PRIMARY KEY, `title` VARCHAR(255), `titleIgnorePrefix` VARCHAR(255), `subtitle` VARCHAR(255), `publishedYear` VARCHAR(255), `publishedDate` VARCHAR(255), `publisher` VARCHAR(255), `description` TEXT, `isbn` VARCHAR(255), `asin` VARCHAR(255), `language` VARCHAR(255), `explicit` TINYINT(1), `abridged` TINYINT(1), `coverPath` VARCHAR(255), `duration` FLOAT, `narrators` JSON, `audioFiles` JSON, `ebookFile` JSON, `chapters` JSON, `tags` JSON, `genres` JSON, `createdAt` DATETIME NOT NULL, `updatedAt` DATETIME NOT NULL);
> CREATE INDEX `books_title` ON `books` (`title` COLLATE `NOCASE`);
> CREATE INDEX `books_published_year` ON `books` (`publishedYear`);
> CREATE TABLE `podcasts` (`id` UUID PRIMARY KEY, `title` VARCHAR(255), `titleIgnorePrefix` VARCHAR(255), `author` VARCHAR(255), `releaseDate` VARCHAR(255), `feedURL` VARCHAR(255), `imageURL` VARCHAR(255), `description` TEXT, `itunesPageURL` VARCHAR(255), `itunesId` VARCHAR(255), `itunesArtistId` VARCHAR(255), `language` VARCHAR(255), `podcastType` VARCHAR(255), `explicit` TINYINT(1), `autoDownloadEpisodes` TINYINT(1), `autoDownloadSchedule` VARCHAR(255), `lastEpisodeCheck` DATETIME, `maxEpisodesToKeep` INTEGER, `maxNewEpisodesToDownload` INTEGER, `coverPath` VARCHAR(255), `tags` JSON, `genres` JSON, `createdAt` DATETIME NOT NULL, `updatedAt` DATETIME NOT NULL);
7c9,14
< CREATE TABLE `libraryItems` (`id` UUID PRIMARY KEY, `ino` VARCHAR(255), `path` VARCHAR(255), `relPath` VARCHAR(255), `mediaId` UUIDV4, `mediaType` VARCHAR(255), `isFile` TINYINT(1), `isMissing` TINYINT(1), `isInvalid` TINYINT(1), `mtime` DATETIME, `ctime` DATETIME, `birthtime` DATETIME, `lastScan` DATETIME, `lastScanVersion` VARCHAR(255), `libraryFiles` JSON, `extraData` JSON, `createdAt` DATETIME NOT NULL, `updatedAt` DATETIME NOT NULL, `libraryId` UUID REFERENCES `libraries` (`id`) ON DELETE SET NULL ON UPDATE CASCADE, `libraryFolderId` UUID REFERENCES `libraryFolders` (`id`) ON DELETE SET NULL ON UPDATE CASCADE);
---
> CREATE TABLE `libraryItems` (`id` UUID PRIMARY KEY, `ino` VARCHAR(255), `path` VARCHAR(255), `relPath` VARCHAR(255), `mediaId` UUIDV4, `mediaType` VARCHAR(255), `isFile` TINYINT(1), `isMissing` TINYINT(1), `isInvalid` TINYINT(1), `mtime` DATETIME, `ctime` DATETIME, `birthtime` DATETIME, `size` BIGINT, `lastScan` DATETIME, `lastScanVersion` VARCHAR(255), `libraryFiles` JSON, `extraData` JSON, `createdAt` DATETIME NOT NULL, `updatedAt` DATETIME NOT NULL, `libraryId` UUID REFERENCES `libraries` (`id`) ON DELETE SET NULL ON UPDATE CASCADE, `libraryFolderId` UUID REFERENCES `libraryFolders` (`id`) ON DELETE SET NULL ON UPDATE CASCADE);
> CREATE INDEX `library_items_created_at` ON `libraryItems` (`createdAt`);
> CREATE INDEX `library_items_media_id` ON `libraryItems` (`mediaId`);
> CREATE INDEX `library_items_library_id_media_type` ON `libraryItems` (`libraryId`, `mediaType`);
> CREATE INDEX `library_items_birthtime` ON `libraryItems` (`birthtime`);
> CREATE INDEX `library_items_mtime` ON `libraryItems` (`mtime`);
9,12c16,24
< CREATE TABLE `series` (`id` UUID PRIMARY KEY, `name` VARCHAR(255), `description` TEXT, `createdAt` DATETIME NOT NULL, `updatedAt` DATETIME NOT NULL, `libraryId` UUID REFERENCES `libraries` (`id`) ON DELETE CASCADE ON UPDATE CASCADE);
< CREATE TABLE `bookSeries` (`id` UUID PRIMARY KEY, `sequence` VARCHAR(255), `bookId` UUID REFERENCES `books` (`id`) ON DELETE CASCADE ON UPDATE CASCADE, `seriesId` UUID REFERENCES `series` (`id`) ON DELETE CASCADE ON UPDATE CASCADE, UNIQUE (`bookId`, `seriesId`));
< CREATE TABLE `authors` (`id` UUID PRIMARY KEY, `name` VARCHAR(255), `asin` VARCHAR(255), `description` TEXT, `imagePath` VARCHAR(255), `createdAt` DATETIME NOT NULL, `updatedAt` DATETIME NOT NULL, `libraryId` UUID REFERENCES `libraries` (`id`) ON DELETE CASCADE ON UPDATE CASCADE);
< CREATE TABLE `bookAuthors` (`id` UUID PRIMARY KEY, `bookId` UUID REFERENCES `books` (`id`) ON DELETE CASCADE ON UPDATE CASCADE, `authorId` UUID REFERENCES `authors` (`id`) ON DELETE CASCADE ON UPDATE CASCADE, UNIQUE (`bookId`, `authorId`));
---
> CREATE INDEX `media_progresses_updated_at` ON `mediaProgresses` (`updatedAt`);
> CREATE TABLE `series` (`id` UUID PRIMARY KEY, `name` VARCHAR(255), `nameIgnorePrefix` VARCHAR(255), `description` TEXT, `createdAt` DATETIME NOT NULL, `updatedAt` DATETIME NOT NULL, `libraryId` UUID REFERENCES `libraries` (`id`) ON DELETE CASCADE ON UPDATE CASCADE);
> CREATE INDEX `series_name` ON `series` (`name` COLLATE `NOCASE`);
> CREATE INDEX `series_library_id` ON `series` (`libraryId`);
> CREATE TABLE `bookSeries` (`id` UUID PRIMARY KEY, `sequence` VARCHAR(255), `createdAt` DATETIME NOT NULL, `bookId` UUID REFERENCES `books` (`id`) ON DELETE CASCADE ON UPDATE CASCADE, `seriesId` UUID REFERENCES `series` (`id`) ON DELETE CASCADE ON UPDATE CASCADE, UNIQUE (`bookId`, `seriesId`));
> CREATE TABLE `authors` (`id` UUID PRIMARY KEY, `name` VARCHAR(255), `lastFirst` VARCHAR(255), `asin` VARCHAR(255), `description` TEXT, `imagePath` VARCHAR(255), `createdAt` DATETIME NOT NULL, `updatedAt` DATETIME NOT NULL, `libraryId` UUID REFERENCES `libraries` (`id`) ON DELETE CASCADE ON UPDATE CASCADE);
> CREATE INDEX `authors_name` ON `authors` (`name` COLLATE `NOCASE`);
> CREATE INDEX `authors_library_id` ON `authors` (`libraryId`);
> CREATE TABLE `bookAuthors` (`id` UUID PRIMARY KEY, `createdAt` DATETIME NOT NULL, `bookId` UUID REFERENCES `books` (`id`) ON DELETE CASCADE ON UPDATE CASCADE, `authorId` UUID REFERENCES `authors` (`id`) ON DELETE CASCADE ON UPDATE CASCADE, UNIQUE (`bookId`, `authorId`));
15c27
< CREATE TABLE `playlists` (`id` UUID PRIMARY KEY, `name` VARCHAR(255), `description` TEXT, `createdAt` DATETIME NOT NULL, `updatedAt` DATETIME NOT NULL, `libraryId` UUID REFERENCES `libraries` (`id`) ON DELETE SET NULL ON UPDATE CASCADE, `userId` UUID REFERENCES `users` (`id`) ON DELETE SET NULL ON UPDATE CASCADE);
---
> CREATE TABLE `playlists` (`id` UUID PRIMARY KEY, `name` VARCHAR(255), `description` TEXT, `createdAt` DATETIME NOT NULL, `updatedAt` DATETIME NOT NULL, `libraryId` UUID REFERENCES `libraries` (`id`) ON DELETE SET NULL ON UPDATE CASCADE, `userId` UUID REFERENCES `users` (`id`) ON DELETE CASCADE ON UPDATE CASCADE);
19c31
< CREATE TABLE `feeds` (`id` UUID PRIMARY KEY, `slug` VARCHAR(255), `entityType` VARCHAR(255), `entityId` UUIDV4, `entityUpdatedAt` DATETIME, `serverAddress` VARCHAR(255), `feedURL` VARCHAR(255), `imageURL` VARCHAR(255), `siteURL` VARCHAR(255), `title` VARCHAR(255), `description` TEXT, `author` VARCHAR(255), `podcastType` VARCHAR(255), `language` VARCHAR(255), `ownerName` VARCHAR(255), `ownerEmail` VARCHAR(255), `explicit` TINYINT(1), `preventIndexing` TINYINT(1), `createdAt` DATETIME NOT NULL, `updatedAt` DATETIME NOT NULL, `userId` UUID REFERENCES `users` (`id`) ON DELETE SET NULL ON UPDATE CASCADE);
---
> CREATE TABLE `feeds` (`id` UUID PRIMARY KEY, `slug` VARCHAR(255), `entityType` VARCHAR(255), `entityId` UUIDV4, `entityUpdatedAt` DATETIME, `serverAddress` VARCHAR(255), `feedURL` VARCHAR(255), `imageURL` VARCHAR(255), `siteURL` VARCHAR(255), `title` VARCHAR(255), `description` TEXT, `author` VARCHAR(255), `podcastType` VARCHAR(255), `language` VARCHAR(255), `ownerName` VARCHAR(255), `ownerEmail` VARCHAR(255), `explicit` TINYINT(1), `preventIndexing` TINYINT(1), `coverPath` VARCHAR(255), `createdAt` DATETIME NOT NULL, `updatedAt` DATETIME NOT NULL, `userId` UUID REFERENCES `users` (`id`) ON DELETE SET NULL ON UPDATE CASCADE);

Source file schema_v2.4.0.sql to schema_v2.4.1.sql had differences
31,33d30
< CREATE TABLE `feeds` (`id` UUID PRIMARY KEY, `slug` VARCHAR(255), `entityType` VARCHAR(255), `entityId` UUIDV4, `entityUpdatedAt` DATETIME, `serverAddress` VARCHAR(255), `feedURL` VARCHAR(255), `imageURL` VARCHAR(255), `siteURL` VARCHAR(255), `title` VARCHAR(255), `description` TEXT, `author` VARCHAR(255), `podcastType` VARCHAR(255), `language` VARCHAR(255), `ownerName` VARCHAR(255), `ownerEmail` VARCHAR(255), `explicit` TINYINT(1), `preventIndexing` TINYINT(1), `coverPath` VARCHAR(255), `createdAt` DATETIME NOT NULL, `updatedAt` DATETIME NOT NULL, `userId` UUID REFERENCES `users` (`id`) ON DELETE SET NULL ON UPDATE CASCADE);
< CREATE TABLE `feedEpisodes` (`id` UUID PRIMARY KEY, `title` VARCHAR(255), `author` VARCHAR(255), `description` TEXT, `siteURL` VARCHAR(255), `enclosureURL` VARCHAR(255), `enclosureType` VARCHAR(255), `enclosureSize` BIGINT, `pubDate` VARCHAR(255), `season` VARCHAR(255), `episode` VARCHAR(255), `episodeType` VARCHAR(255), `duration` FLOAT, `filePath` VARCHAR(255), `explicit` TINYINT(1), `createdAt` DATETIME NOT NULL, `updatedAt` DATETIME NOT NULL, `feedId` UUID REFERENCES `feeds` (`id`) ON DELETE CASCADE ON UPDATE CASCADE);
< CREATE TABLE `settings` (`key` VARCHAR(255) PRIMARY KEY, `value` JSON, `createdAt` DATETIME NOT NULL, `updatedAt` DATETIME NOT NULL);

Source file schema_v2.4.1.sql to schema_v2.4.2.sql had differences
30a31,33
> CREATE TABLE `feeds` (`id` UUID PRIMARY KEY, `slug` VARCHAR(255), `entityType` VARCHAR(255), `entityId` UUIDV4, `entityUpdatedAt` DATETIME, `serverAddress` VARCHAR(255), `feedURL` VARCHAR(255), `imageURL` VARCHAR(255), `siteURL` VARCHAR(255), `title` VARCHAR(255), `description` TEXT, `author` VARCHAR(255), `podcastType` VARCHAR(255), `language` VARCHAR(255), `ownerName` VARCHAR(255), `ownerEmail` VARCHAR(255), `explicit` TINYINT(1), `preventIndexing` TINYINT(1), `coverPath` VARCHAR(255), `createdAt` DATETIME NOT NULL, `updatedAt` DATETIME NOT NULL, `userId` UUID REFERENCES `users` (`id`) ON DELETE SET NULL ON UPDATE CASCADE);
> CREATE TABLE `feedEpisodes` (`id` UUID PRIMARY KEY, `title` VARCHAR(255), `author` VARCHAR(255), `description` TEXT, `siteURL` VARCHAR(255), `enclosureURL` VARCHAR(255), `enclosureType` VARCHAR(255), `enclosureSize` BIGINT, `pubDate` VARCHAR(255), `season` VARCHAR(255), `episode` VARCHAR(255), `episodeType` VARCHAR(255), `duration` FLOAT, `filePath` VARCHAR(255), `explicit` TINYINT(1), `createdAt` DATETIME NOT NULL, `updatedAt` DATETIME NOT NULL, `feedId` UUID REFERENCES `feeds` (`id`) ON DELETE CASCADE ON UPDATE CASCADE);
> CREATE TABLE `settings` (`key` VARCHAR(255) PRIMARY KEY, `value` JSON, `createdAt` DATETIME NOT NULL, `updatedAt` DATETIME NOT NULL);

Source file schema_v2.4.3.sql to schema_v2.4.4.sql had differences
12a13
> CREATE INDEX `library_items_library_id_media_id_media_type` ON `libraryItems` (`libraryId`, `mediaId`, `mediaType`);

Source file schema_v2.7.0.sql to schema_v2.7.1.sql had differences
8a9
> CREATE INDEX `podcast_episodes_created_at` ON `podcastEpisodes` (`createdAt`);

Source file schema_v2.7.2.sql to schema_v2.8.0.sql had differences
35a36
> CREATE TABLE `customMetadataProviders` (`id` UUID PRIMARY KEY, `name` VARCHAR(255), `mediaType` VARCHAR(255), `url` VARCHAR(255), `authHeaderValue` VARCHAR(255), `extraData` JSON, `createdAt` DATETIME NOT NULL, `updatedAt` DATETIME NOT NULL);

Source file schema_v2.10.1.sql to schema_v2.11.0.sql had differences
36a37
> CREATE TABLE `mediaItemShares` (`id` UUID PRIMARY KEY, `mediaItemId` UUIDV4, `mediaItemType` VARCHAR(255), `slug` VARCHAR(255), `pash` VARCHAR(255), `expiresAt` DATETIME, `extraData` JSON, `createdAt` DATETIME NOT NULL, `updatedAt` DATETIME NOT NULL, `userId` UUID REFERENCES `users` (`id`) ON DELETE SET NULL ON UPDATE CASCADE);

@nichwall (Contributor Author)

Thanks for all the effort put into this first migration. I'm not sure yet if this should be merged before v2.14.0 is ready. Sequelize will create the index automatically without a migration, and the migration is idempotent, so it won't break anything if it is merged early.

Are you meaning that server version 2.14.0 would be released and then this PR merged afterwards, so that the unique index is included in 2.14.0 without the migration?

If so, I don't think we should add the index without the migration, because anyone affected by the duplicate series issue will have errors when updating. I could be mixing things up, though.

@advplyr (Owner) commented Sep 25, 2024

No, we definitely merge this before releasing v2.14.0.

@advplyr (Owner) commented Oct 12, 2024

Thanks!

@advplyr merged commit e58d7db into advplyr:master on Oct 12, 2024
5 checks passed
Linked issue: [Bug]: Edit series tag issue (#3207)
3 participants