Update runScript.js and readme #9

Draft: wants to merge 9 commits into `main`
55 changes: 31 additions & 24 deletions .github/workflows/test.yml
@@ -6,35 +6,42 @@
      - '**/' # Listen for changes in any directory

jobs:
  test-adapters:
    runs-on: ubuntu-latest

    steps:
      - name: Checkout repository
        uses: actions/checkout@v2

      # Get changed files in the adapters folder
      - name: Get changed files in adapters folder
        id: changed_files_adapters
        uses: Ana06/[email protected]
        with:
          format: space-delimited
          token: ${{ secrets.GITHUB_TOKEN }}
          filter: 'adapters/**/*'

      # Extract unique folder names from changed files
      - name: Extract changed folders
        id: extract_folders
        run: |
          changed_folders=$(echo "${{ steps.changed_files_adapters.outputs.files }}" | awk -F/ '{print $2}' | sort -u)
          echo "::set-output name=changed_folders::$changed_folders"

      - name: Print changed folders
        run: |
          echo "Changed folders: ${{ steps.extract_folders.outputs.changed_folders }}"

      # Loop through changed folders and run commands
      - name: Run commands in each folder
        run: |
          for folder in ${{ steps.extract_folders.outputs.changed_folders }}; do
            cd adapters/$folder
            npm install
            npm run compile
            cd ../../
            npm install
            npm run start $folder
          done
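For clarity, the folder-extraction step above (`awk -F/ '{print $2}'` piped through `sort -u`) is equivalent to the following TypeScript sketch; the function name and input shape are illustrative only, not part of the workflow:

```typescript
// Illustrative only: mirrors `awk -F/ '{print $2}' | sort -u` from the workflow.
// Given a list of changed file paths, return the unique adapter folder names.
function changedFolders(files: string[]): string[] {
  const folders = files
    .filter((f) => f.startsWith("adapters/")) // only adapter files are relevant
    .map((f) => f.split("/")[1]);             // second path segment = folder name
  return [...new Set(folders)].sort();        // de-duplicate, like `sort -u`
}

// Example: changedFolders(["adapters/renzo/src/index.ts", "adapters/renzo/package.json"])
// returns ["renzo"]
```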
67 changes: 44 additions & 23 deletions README.md
@@ -1,23 +1,25 @@
# scroll-adapters

## TVL by User - Adapters

### Onboarding Checklist

Please complete the following:

1. Set up a subquery indexer (e.g. Goldsky Subgraph)
   1. Follow the docs here: <https://docs.goldsky.com/guides/create-a-no-code-subgraph>
2. General steps:
   1. Create an account at app.goldsky.com.
   2. Deploy a subgraph or migrate an existing subgraph: <https://docs.goldsky.com/subgraphs/introduction>
   3. Use the slugs `scroll-testnet` and `scroll` when deploying the config.
3. Prepare the subquery code according to the Data Requirement section below.
4. Submit your response as a Pull Request to <https://github.com/delta-hq/scroll-adapters>, with the path being `/<your_protocol_handle>`.

### Code Changes Expected

1. Create a function like below:

```
export const getUserTVLByBlock = async (blocks: BlockData) => {
    const { blockNumber, blockTimestamp } = blocks
@@ -28,14 +30,18 @@

};
```

2. The interface for the input block data is given below; `blockTimestamp` is in epoch (seconds) format.

```
interface BlockData {
    blockNumber: number;
    blockTimestamp: number;
}
```

3. The output `csvRows` is a list:

```
const csvRows: OutputDataSchemaRow[] = [];

@@ -49,16 +55,18 @@
    usd_price: number; // assign 0 if not available
};
```

4. Make sure you add the relevant `package.json` and `tsconfig.json` (a minimal `package.json` sketch appears under Notes below).
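Putting items 1 through 3 together, a minimal adapter sketch might look like the following. The subgraph URL, query, and entity names are placeholders, not part of this repo; it also assumes Node 18+ for the global `fetch`:

```typescript
// Hypothetical sketch; endpoint, query, and entity names are placeholders.
interface BlockData {
  blockNumber: number;
  blockTimestamp: number;
}

type OutputDataSchemaRow = {
  block_number: number;
  timestamp: number;
  user_address: string;
  token_address: string;
  token_balance: bigint; // raw on-chain amount
  token_symbol: string;  // assign "" if not available
  usd_price: number;     // assign 0 if not available
};

// Placeholder endpoint; substitute your deployed subgraph's URL.
const SUBGRAPH_URL = "https://api.goldsky.com/api/public/<project>/subgraphs/<name>/gn";

export const getUserTVLByBlock = async (blocks: BlockData): Promise<OutputDataSchemaRow[]> => {
  const { blockNumber, blockTimestamp } = blocks;
  const csvRows: OutputDataSchemaRow[] = [];

  // Placeholder query; fetch whatever entities your protocol needs.
  const query = `{
    positions(block: { number: ${blockNumber} }, first: 1000) {
      user { id }
      token { id symbol }
      balance
    }
  }`;

  const response = await fetch(SUBGRAPH_URL, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ query }),
  });
  const { data } = await response.json();

  for (const position of data.positions) {
    csvRows.push({
      block_number: blockNumber,
      timestamp: blockTimestamp,
      user_address: position.user.id,
      token_address: position.token.id,
      token_balance: BigInt(position.balance), // raw amount; do not divide by decimals
      token_symbol: position.token.symbol ?? "",
      usd_price: 0,
    });
  }

  return csvRows;
};
```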

### Data Requirement

Goal: **Hourly snapshot of TVL by User by Asset**

For each protocol, we are looking for the following:

1. A query that fetches all relevant events required to calculate user TVL in the protocol at an hourly level.
2. Code that uses the above query, fetches all the data, and converts it to a CSV file in the format given below.
3. Token amounts should be raw token amounts; please do not divide by decimals.
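As an illustration of item 3, with assumed values for an 18-decimal token:

```typescript
// Report the integer amount exactly as it appears on-chain.
const rawBalance = 1500000000000000000n;         // 1.5 tokens at 18 decimals: report this
const humanReadable = Number(rawBalance) / 1e18; // 1.5: do NOT report this
```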

Teams can refer to the adapter example in this repo to write the required code.

@@ -74,7 +82,6 @@
| token_balance | Balance of token (**If the token was borrowed, this balance should be negative**) |
| usd_price (from oracle) | Price of token (optional) |

Sample output row will look like this:

| blocknumber | timestamp | user_address | token_address | token_balance | token_symbol (optional) | usd_price (optional) |
@@ -84,11 +91,14 @@
Note: **Expect multiple entries per user if the protocol has more than one token asset**

### index.ts

In this scope, the code must read a CSV file named `hourly_blocks.csv`, with headers, that contains the following columns:

- `number` - block number
- `timestamp` - block timestamp

It must output a CSV file named `outputData.csv`, with headers, containing the following columns:

- `block_number` - block number
- `timestamp` - block timestamp
- `user_address` - user address
@@ -99,26 +109,37 @@
e.g. `adapters/renzo/src/index.ts`

For testing the adapter code for a single hourly block, use the following `hourly_blocks.csv` file:

```
number,timestamp
4243360,1714773599
```
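A minimal `index.ts` entry point wiring `getUserTVLByBlock` to these two files might look like the sketch below. It assumes the `csv-parser` and `fast-csv` packages used elsewhere in this repo, `esModuleInterop` enabled in `tsconfig.json`, and a placeholder import path for your own adapter code:

```typescript
import fs from "fs";
import csv from "csv-parser";
import { write } from "fast-csv";
import { getUserTVLByBlock } from "./sources"; // placeholder import; adjust to your layout

interface BlockData {
  blockNumber: number;
  blockTimestamp: number;
}

// Read hourly_blocks.csv (columns: number, timestamp) into BlockData objects.
const readBlocksFromCSV = (filePath: string): Promise<BlockData[]> =>
  new Promise((resolve, reject) => {
    const blocks: BlockData[] = [];
    fs.createReadStream(filePath)
      .pipe(csv())
      .on("data", (row: any) => {
        const blockNumber = parseInt(row.number, 10);
        const blockTimestamp = parseInt(row.timestamp, 10);
        if (!isNaN(blockNumber) && !isNaN(blockTimestamp)) {
          blocks.push({ blockNumber, blockTimestamp });
        }
      })
      .on("end", () => resolve(blocks))
      .on("error", reject);
  });

readBlocksFromCSV("hourly_blocks.csv").then(async (blocks) => {
  const allCsvRows: any[] = [];
  for (const block of blocks) {
    allCsvRows.push(...(await getUserTVLByBlock(block)));
  }
  // Write outputData.csv with headers, waiting for the stream to finish.
  await new Promise<void>((resolve, reject) => {
    write(allCsvRows, { headers: true })
      .pipe(fs.createWriteStream("outputData.csv", { flags: "w" }))
      .on("finish", () => resolve())
      .on("error", reject);
  });
  console.log("outputData.csv has been written.");
});
```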

### Adapter Example

In this repo, there is an adapter example. This adapter fetches position data from the subgraph and calculates the TVL by user.
The main script generates the output as a CSV file.

[Adapter Example](adapters/example/dex/src/index.ts)

## Notes

1. Please make sure to have a `compile` script in the `package.json` file, so that the TypeScript files can be compiled into `dist/index.js` (a sketch follows).
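For reference, a minimal `package.json` along these lines should satisfy the CI. The script contents are an assumption (they presume a `tsconfig.json` with `outDir` set to `dist`); adjust to your setup:

```json
{
  "name": "your-protocol-adapter",
  "private": true,
  "scripts": {
    "compile": "tsc -p tsconfig.json"
  },
  "devDependencies": {
    "typescript": "^5.4.0"
  }
}
```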

## How to execute this project?

**Please** ensure you have `adapters/{PROJECT_FOLDER}/hourly_blocks.csv`, comma-delimited.

```bash
cd adapters
npm i # install all packages in main repo

cd {PROJECT_FOLDER}
npm i
npm run compile

cd ..
npm run start # run runScript.js (this runs in the ci/cd)
```

These commands will run your project; they mimic the commands found in the CI/CD [workflow](.github/workflows/test.yml).
29 changes: 0 additions & 29 deletions adapters/layerbank/src/index.ts
@@ -97,32 +97,3 @@

return blocks;
};

readBlocksFromCSV("hourly_blocks.csv")
.then(async (blocks: any[]) => {
console.log(blocks);
const allCsvRows: any[] = []; // Array to accumulate CSV rows for all blocks

for (const block of blocks) {
try {
const result = await getUserTVLByBlock(block);
for (let i = 0; i < result.length; i++) {
allCsvRows.push(result[i]);
}
} catch (error) {
console.error(`An error occurred for block ${block}:`, error);
}
}
await new Promise((resolve, reject) => {
const ws = fs.createWriteStream(`outputData.csv`, { flags: "w" });
write(allCsvRows, { headers: true })
.pipe(ws)
.on("finish", () => {
console.log(`CSV file has been written.`);
resolve;
});
});
})
.catch((err) => {
console.error("Error reading CSV file:", err);
});
113 changes: 71 additions & 42 deletions adapters/runScript.js
@@ -1,8 +1,8 @@
// runScript.js
const fs = require('fs');
const path = require('path');

const csv = require('csv-parser');
const { write } = require('fast-csv');

// Get the folder name from command line arguments
const folderName = process.argv[2];
@@ -21,27 +21,31 @@
process.exit(1);
}

// Check if the provided folder contains dist/index.js file
const indexPath = path.join(folderPath, 'dist/index.js');
if (!fs.existsSync(indexPath)) {
  console.error(`Folder '${folderName}' does not contain dist/index.js file. Please compile index.ts`);
process.exit(1);
}

// Import the getUserTVLByBlock function from the provided folder
const { getUserTVLByBlock } = require(indexPath);

const readBlocksFromCSV = async (filePath) => {
const blocks = [];

await new Promise((resolve, reject) => {
    fs.createReadStream(filePath, { encoding: 'utf8' })
      .pipe(csv({
        separator: ',', // hourly_blocks.csv is comma-separated
        mapHeaders: ({ header, index }) => header.trim() // Trim headers to remove any leading/trailing spaces
      }))
.on('data', (row) => {
console.log(row)
const blockNumber = parseInt(row.number, 10);
        const blockTimestamp = parseInt(row.timestamp, 10);
if (!isNaN(blockNumber) && blockTimestamp) {
          blocks.push({ blockNumber, blockTimestamp });
}
})
.on('end', () => {
@@ -55,40 +59,65 @@
return blocks;
};

// Log the full path to the CSV file
const csvFilePath = path.join(folderPath, 'hourly_blocks.csv');
console.log(`Looking for hourly_blocks.csv at: ${csvFilePath}`);

// Additional check for file existence before proceeding
if (!fs.existsSync(csvFilePath)) {
  console.error(`File '${csvFilePath}' does not exist.`);
  process.exit(1);
}

// Main function to coordinate the processing
const main = async () => {
  try {
    const blocks = await readBlocksFromCSV(csvFilePath);

    console.log('Blocks read from CSV:', blocks);

    const allCsvRows = []; // Array to accumulate CSV rows for all blocks
    const batchSize = 10;  // Size of batch to trigger writing to the file
    let i = 0;

    for (const block of blocks) {
      try {
        const result = await getUserTVLByBlock(block);

        console.log(`Result for block ${block.blockNumber}:`, result); // Print the result for verification

        // Accumulate CSV rows for all blocks (spread: result is an array of rows)
        allCsvRows.push(...result);

        i++;
        console.log(`Processed block ${i}`);

        // Write to file when batch size is reached or at the end of the loop
        if (i % batchSize === 0 || i === blocks.length) {
          // First flush writes the file with headers (also covers inputs shorter than one batch)
          const firstBatch = i <= batchSize;
          const ws = fs.createWriteStream(`${folderName}/outputData.csv`, { flags: firstBatch ? 'w' : 'a' });
          // Await the write so the process cannot exit before the file is flushed
          await new Promise((resolve, reject) => {
            write(allCsvRows, { headers: firstBatch })
              .pipe(ws)
              .on('finish', () => {
                console.log(`CSV file has been written.`);
                resolve();
              })
              .on('error', reject);
          });

          // Clear the accumulated CSV rows
          allCsvRows.length = 0;
        }
      } catch (error) {
        console.error(`An error occurred for block ${block.blockNumber}:`, error);
      }
    }
  } catch (err) {
    console.error('Error reading CSV file:', err);
  }
};

// Run the main function and ensure the process waits for it to complete
main().then(() => {
  console.log('Processing complete.');
  process.exit(0);
}).catch((err) => {
  console.error('An error occurred:', err);
  process.exit(1);
});