feat: add script for data integrity after postgres migration #37998
Conversation
Actionable comments posted: 3
🧹 Outside diff range and nitpick comments (3)
app/client/packages/rts/src/ctl/verify-migration.mjs (3)
29-29: Remove unused variable `hasDiscrepancy`

The variable `hasDiscrepancy` is set but not used elsewhere. You can safely remove it to clean up the code.

Apply this diff:

```diff
-let hasDiscrepancy = false;
 ...
 if (pgRecord.rows.length === 0) {
   missingInPostgres.push(mongoDoc.id);
-  hasDiscrepancy = true;
 }
```

Also applies to: 63-63
15-15: Align usage messages for consistency

The usage comment and error message differ slightly. Update them to provide consistent instructions to the user.

Apply this diff:

```diff
-// usage node verify-migration.mjs --mongodb-url="mongodb://localhost:27017/dbname" --postgres-url="postgresql://user:password@localhost:5432/dbname"
+// Usage: node verify-migration.mjs --mongodb-url="<url>" --postgres-url="<url>"
 ...
-console.error('Usage: node verify-migration.mjs --mongodb-url=<url> --postgres-url=<url>');
+console.error('Usage: node verify-migration.mjs --mongodb-url="<url>" --postgres-url="<url>"');
```

Also applies to: 134-134
138-138: Remove unnecessary error handling

Errors within `verifyMigration` are already caught and handled. The `.catch(console.error)` is redundant and can be removed.

Apply this diff:

```diff
-verifyMigration(mongoUrl, pgUrl).catch(console.error);
+verifyMigration(mongoUrl, pgUrl);
```
```js
const allMongoDocs = await mongoDb
  .collection(collectionName)
  .find({})
  .toArray();
```
🛠️ Refactor suggestion
Optimize data fetching for large collections
Fetching all documents with toArray()
can consume a lot of memory for large collections. Consider using cursors or processing documents in batches to handle large datasets efficiently.
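As a rough sketch of this suggestion (assuming the `mongodb` Node driver, whose `find()` cursor is async-iterable; `streamCompare` and `compareDoc` are hypothetical names, not part of the script):

```javascript
// Sketch: stream documents via the cursor's async iterator instead of toArray().
// Only the driver's current batch is held in memory, not the whole collection.
async function streamCompare(mongoDb, collectionName, compareDoc) {
  const cursor = mongoDb.collection(collectionName).find({});
  let count = 0;
  for await (const doc of cursor) {
    await compareDoc(doc); // compare one document at a time
    count++;
  }
  return count;
}
```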
```js
const pgDocs = await pgClient.query(
  `SELECT id FROM ${pgTableName} WHERE "deletedAt" IS NULL`
);
```
🛠️ Refactor suggestion
Improve PostgreSQL query efficiency
Retrieving all rows at once may cause performance issues with large tables. Use pagination or streaming to handle large datasets without overwhelming memory resources.
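One way to page through the table is keyset pagination, sketched below under the assumption that `id` is a text column with an index and that `pgClient` follows the `pg` library's `query(text, values)` API; `fetchIdsInPages` is a hypothetical helper name:

```javascript
// Sketch: fetch ids in fixed-size pages using keyset pagination (WHERE id > last seen).
// Assumes ids are comparable text values; each iteration issues one bounded query.
async function* fetchIdsInPages(pgClient, tableName, pageSize = 1000) {
  let lastId = '';
  for (;;) {
    const { rows } = await pgClient.query(
      `SELECT id FROM ${tableName} WHERE "deletedAt" IS NULL AND id > $1 ORDER BY id LIMIT $2`,
      [lastId, pageSize],
    );
    if (rows.length === 0) return; // no more rows
    yield rows;
    lastId = rows[rows.length - 1].id; // resume after the last id of this page
  }
}
```

Unlike `OFFSET`-based paging, this keeps each query cheap even deep into a large table.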
```js
const pgRecord = await pgClient.query(
  `SELECT id FROM ${pgTableName} WHERE id = $1 AND "deletedAt" IS NULL`,
  [mongoDoc.id]
);
```
Sanitize table names to prevent SQL injection

Directly interpolating `pgTableName` into SQL queries can lead to SQL injection if `collectionName` contains malicious input. Validate `collectionName` to ensure it contains only safe characters before using it in queries.

Apply this diff to sanitize the table name:

```diff
+const sanitizedTableName = collectionName.replace(/[^a-zA-Z0-9_]/g, '').toLowerCase();
-const pgTableName = collectionName.toLowerCase();
+const pgTableName = sanitizedTableName;
```
Also applies to: 68-70
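Since `pg` can parameterize values but not identifiers, an alternative to silently stripping characters is rejecting unexpected names outright. A sketch (the whitelist pattern and the `toPgTableName` name are assumptions, not part of the script):

```javascript
// Sketch: fail fast on unexpected collection names instead of rewriting them,
// so a malicious or typo'd name never reaches an interpolated query.
// The [A-Za-z][A-Za-z0-9_]* whitelist is an assumption about valid table names.
function toPgTableName(collectionName) {
  if (!/^[A-Za-z][A-Za-z0-9_]*$/.test(collectionName)) {
    throw new Error(`Unsafe collection name: ${collectionName}`);
  }
  return collectionName.toLowerCase();
}
```

Rejecting bad input also surfaces mapping bugs that character-stripping would hide (e.g. two different collection names collapsing to the same table).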
Can we resolve comments from bot, rest looks good.
Actionable comments posted: 2
🧹 Outside diff range and nitpick comments (3)
app/client/packages/rts/src/ctl/verify-migration.mjs (3)
5-7: Add input validation to isArchivedObject function

The function should handle edge cases like undefined/null input and validate field types.

```diff
 function isArchivedObject(doc) {
+  if (!doc || typeof doc !== 'object') return false;
   return doc.deleted === true || doc.deletedAt != null;
 }
```
192-194: Improve object comparison logic

Using JSON.stringify for comparison can be unreliable with different property orders.

```diff
-  return JSON.stringify(value1) === JSON.stringify(value2);
+  if (Array.isArray(value1)) {
+    return value1.length === value2.length &&
+      value1.every((v, i) => isEquivalent(v, value2[i]));
+  }
+  const keys1 = Object.keys(value1);
+  const keys2 = Object.keys(value2);
+  return keys1.length === keys2.length &&
+    keys1.every(key => isEquivalent(value1[key], value2[key]));
```

Also applies to: 197-199
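Written out in full, an order-insensitive deep comparison along these lines could look as follows (a sketch; the script's actual `isEquivalent` signature and base cases are assumptions):

```javascript
// Sketch: structural equality that ignores object key order, unlike
// JSON.stringify, and distinguishes arrays from plain objects.
function isEquivalent(value1, value2) {
  if (value1 === value2) return true;
  if (typeof value1 !== 'object' || typeof value2 !== 'object' ||
      value1 === null || value2 === null) {
    return false; // primitives already failed the === check above
  }
  if (Array.isArray(value1) !== Array.isArray(value2)) return false;
  if (Array.isArray(value1)) {
    // array element order is significant
    return value1.length === value2.length &&
      value1.every((v, i) => isEquivalent(v, value2[i]));
  }
  const keys1 = Object.keys(value1);
  const keys2 = Object.keys(value2);
  // same key set (regardless of order) and equivalent values per key
  return keys1.length === keys2.length &&
    keys1.every((key) => Object.hasOwn(value2, key) &&
      isEquivalent(value1[key], value2[key]));
}
```

Using `Object.hasOwn` here also sidesteps the `noPrototypeBuiltins` lint warning Biome raised elsewhere in the file.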
208-214: Enhance command-line argument handling

Add URL validation and environment variable support.

```diff
+function isValidUrl(str) {
+  try {
+    new URL(str);
+    return true;
+  } catch {
+    return false;
+  }
+}
 for (const arg of args) {
   if (arg.startsWith('--mongodb-url=')) {
-    mongoUrl = arg.split('=')[1];
+    mongoUrl = arg.split('=')[1] || process.env.MONGODB_URL;
+    if (!isValidUrl(mongoUrl)) {
+      console.error('Invalid MongoDB URL');
+      process.exit(1);
+    }
   } else if (arg.startsWith('--postgres-url=')) {
-    pgUrl = arg.split('=')[1];
+    pgUrl = arg.split('=')[1] || process.env.POSTGRES_URL;
+    if (!isValidUrl(pgUrl)) {
+      console.error('Invalid PostgreSQL URL');
+      process.exit(1);
+    }
   }
 }
```
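Note that `new URL` accepts any scheme, so a `file://` path would pass the check above. A slightly stricter sketch that also pins the protocol (the function name and scheme lists are assumptions):

```javascript
// Sketch: a connection string is valid only if it parses as a URL AND uses
// one of the expected schemes (WHATWG URL keeps the trailing colon, e.g. "mongodb:").
function isValidDbUrl(str, allowedProtocols) {
  try {
    const url = new URL(str);
    return allowedProtocols.includes(url.protocol);
  } catch {
    return false; // new URL throws on strings that do not parse
  }
}
```

Usage would mirror the diff above, e.g. `isValidDbUrl(mongoUrl, ['mongodb:', 'mongodb+srv:'])`.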
📜 Review details
Configuration used: .coderabbit.yaml
Review profile: CHILL
📒 Files selected for processing (1)
app/client/packages/rts/src/ctl/verify-migration.mjs
(1 hunks)
🧰 Additional context used
🪛 Biome (1.9.4)
app/client/packages/rts/src/ctl/verify-migration.mjs
[error] 175-175: Do not access Object.prototype method 'hasOwnProperty' from target object.
It's recommended using Object.hasOwn() instead of using Object.hasOwnProperty().
See MDN web docs for more details.
(lint/suspicious/noPrototypeBuiltins)
🔇 Additional comments (2)
app/client/packages/rts/src/ctl/verify-migration.mjs (2)
41-41: Prevent SQL injection by sanitizing table names

Direct string interpolation of table names is unsafe.

```diff
-const pgTableName = collectionName.toLowerCase();
+const sanitizedTableName = collectionName.replace(/[^a-zA-Z0-9_]/g, '').toLowerCase();
+const pgTableName = sanitizedTableName;
```
54-59: 🛠️ Refactor suggestion

Optimize memory usage for large collections

The current implementation loads the entire batch into memory with toArray().

```diff
-const mongoDocs = await mongoDb
-  .collection(collectionName)
-  .find({ deleted: { $ne: true }, deletedAt: null })
-  .skip(processedCount)
-  .limit(BATCH_SIZE)
-  .toArray();
+const cursor = mongoDb
+  .collection(collectionName)
+  .find({ deleted: { $ne: true }, deletedAt: null })
+  .skip(processedCount)
+  .limit(BATCH_SIZE);
+
+while (await cursor.hasNext()) {
+  const mongoDoc = await cursor.next();
+  // Process single document
+}
```
Likely invalid or redundant comment.
@AnaghHegde any reason we are not validating the fields introduced after pg is active, e.g. workflow.schedules?

@abhvsn I am checking all the fields for each record instead of one specific field. This would be better and we don't have to keep track of new changes that may come in due to the migration. Check the
This PR has not seen activity for a while. It will be closed in 7 days unless further activity is detected.
Description
Verify the migrated data between pg and mongo to confirm if all the documents have been successfully migrated.
Automation
/ok-to-test tags=""
🔍 Cypress test results
Warning
Tests have not run on the HEAD eca0f59 yet
Mon, 09 Dec 2024 04:45:49 UTC
Communication

Should the DevRel and Marketing teams inform users about this change?

- [ ] Yes
- [ ] No

Summary by CodeRabbit

New Features
- Introduced a new script to verify data integrity between MongoDB and PostgreSQL databases.
- Added functionality to check for discrepancies in document records across both databases.

Bug Fixes
- Improved error handling for connection issues and verification failures.

Documentation
- Enhanced command-line argument parsing for database connection URLs.