
Solution Interpretation #20

Open
Abhishekravindran opened this issue May 29, 2024 · 0 comments
Abhishekravindran commented May 29, 2024

Hi, how do we reproduce the F1 score reported in the paper for the nested NER task? When I run the evaluation following the steps in this GitHub repo, the results come out far below the paper's numbers, as shown below.

--- NER ---

type     precision   recall   f1-score   support

ORG           0.00     0.00       0.00       508
PER          20.42     2.28       4.11      1270
WEA           0.00     0.00       0.00        41
VEH           0.00     0.00       0.00        12
GPE           4.00     0.19       0.35       540
FAC           0.00     0.00       0.00        67
LOC           0.00     0.00       0.00        76

micro        17.96     1.19       2.24      2514
macro         3.49     0.35       0.64      2514
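For context on how these aggregates relate: per-type F1 is the harmonic mean of precision and recall, macro averages the per-type values uniformly, and micro pools raw TP/FP/FN counts before computing precision and recall (so micro cannot be rebuilt from the percentages alone). A minimal sketch, assuming only the numbers copied from the report above; the `f1` helper is illustrative, not part of this repo's code:

```python
# Sketch: recomputing the report's aggregates from its own numbers.
# (precision %, recall %) pairs are copied verbatim from the output above.
results = {
    "ORG": (0.00, 0.00),
    "PER": (20.42, 2.28),
    "WEA": (0.00, 0.00),
    "VEH": (0.00, 0.00),
    "GPE": (4.00, 0.19),
    "FAC": (0.00, 0.00),
    "LOC": (0.00, 0.00),
}

def f1(p: float, r: float) -> float:
    """Harmonic mean of precision and recall (0 if both are 0)."""
    return 2 * p * r / (p + r) if (p + r) > 0 else 0.0

# Per-type F1, e.g. PER: 2*20.42*2.28/(20.42+2.28) ≈ 4.10.
for name, (p, r) in results.items():
    print(f"{name}: f1 = {f1(p, r):.2f}")

# Macro = unweighted mean over the 7 types.
n = len(results)
macro_p = sum(p for p, _ in results.values()) / n              # ≈ 3.49
macro_r = sum(r for _, r in results.values()) / n              # ≈ 0.35
macro_f1 = sum(f1(p, r) for p, r in results.values()) / n      # ≈ 0.64
print(f"macro: P={macro_p:.2f} R={macro_r:.2f} F1={macro_f1:.2f}")

# Micro F1 from the reported micro P/R (17.96, 1.19) gives ≈ 2.23,
# matching the table up to rounding.
print(f"micro F1 ≈ {f1(17.96, 1.19):.2f}")
```

These match the table, so the scoring arithmetic is consistent; the gap to the paper's numbers must come from the model or evaluation setup rather than the metric computation.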
