Update to latest go-datastore. Remove thirdparty/datastore2 #4742
Conversation
Force-pushed from ba8c809 to a6a637d
I think I'm too tired to figure out why this test fails:
I will try again tomorrow, but if anyone has a tip in the meantime, it's welcome.
I have spent considerable time and haven't found why. It would seem that now the expired record goes away from the datastore and cannot be republished once expired. This did not happen before. After looking around a lot I'm not sure where this behavior change comes from, or whether it's a feature or a bugfix. @Stebalien any idea? Reverting the … makes sharness tests fail too.
So the record is being removed by kad-dht when a get message arrives at one of the peers (I assume the peer who published; it's not easy to check with the debugger). It then gets to checking whether the record is still valid, which is this bit of code:

```go
var recordIsBad bool
recvtime, err := u.ParseRFC3339(rec.GetTimeReceived())
if err != nil {
	log.Info("either no receive time set on record, or it was invalid: ", err)
	recordIsBad = true
}

if time.Now().Sub(recvtime) > MaxRecordAge {
	log.Debug("old record found, tossing.")
	recordIsBad = true
}

// NOTE: We do not verify the record here beyond checking these timestamps.
// we put the burden of checking the records on the requester as checking a record
// may be computationally expensive
if recordIsBad {
	err := dht.datastore.Delete(dskey)
	if err != nil {
		log.Error("Failed to delete bad record from datastore: ", err)
	}
	return nil, nil // can treat this as not having the record at all
}
```

The interesting thing is that, looking at git blame, this code has been there for two years. I need to dig a bit deeper to find out why/how it worked before.
It's caused by the removal of a check in this commit. The check was in …:

```go
// if its our record, dont bother checking the times on it
if peer.ID(rec.GetAuthor()) == dht.self {
	return rec, nil
}
```

Without it we will remove our own records once they expire, which wasn't the case before. (btw, did this check work with keystore keys?) I think it has to do with the IPRS refactoring; it's not merged yet though, so when the breaking changes got pulled in here, things obviously broke. cc @dirkmc
That change was made because we removed the signature and author fields from the record, so it's no longer possible to check who the creator of the record was: #4613
Thanks @magik6k! My question here: is losing our own expired records unintended behavior or not? @dirkmc If unintended: how is this going to be addressed (given that …)?
I don't fully understand (maybe I'm missing context), but the republisher in this check does not carry a keystore (it's …).
It was not my intention when I made this change to lose our own expired records. I'm not really sure what the correct answer is here. @whyrusleeping @vyzo @Stebalien do you have any thoughts?
That was certainly not anticipated... we debated the deleterious effects of removing signatures, but didn't see that one coming!
Hrm... yeah. We need to retain expired records for records that belong to keys we own, so that the republisher can continue functioning even in the absence of a still-valid record (what if your node goes offline for a few days?). Maybe we should store them in some other way, like a separate place in the datastore for "things we publish". The code that @dirkmc removed was a pretty gnarly hack in the first place.
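For illustration, here is a rough, self-contained sketch of that idea: keep a second copy of everything we publish under a separate namespace that expiry cleanup never touches, so the republisher can still find it. All names here (`record`, `publishedStore`, `PutLocal`, and so on) are hypothetical and not part of go-libp2p-kad-dht; this only shows the shape of the suggestion.

```go
package main

import (
	"fmt"
	"time"
)

// record is a simplified stand-in for a DHT record (the real type lives in
// go-libp2p-record); the fields here are illustrative only.
type record struct {
	Value    []byte
	Received time.Time
}

// publishedStore keeps two copies: one in the "public" area that is served
// to other peers and subject to expiry, and one in a "things we publish"
// area that expiry cleanup never deletes.
type publishedStore struct {
	public    map[string]record
	published map[string]record
}

func newPublishedStore() *publishedStore {
	return &publishedStore{
		public:    make(map[string]record),
		published: make(map[string]record),
	}
}

// PutLocal stores a record we created ourselves in both areas.
func (s *publishedStore) PutLocal(key string, rec record) {
	s.public[key] = rec
	s.published[key] = rec
}

// Expire mimics the kad-dht cleanup that deletes old records from the public
// area; the published copy survives for the republisher.
func (s *publishedStore) Expire(key string, maxAge time.Duration) {
	if rec, ok := s.public[key]; ok && time.Since(rec.Received) > maxAge {
		delete(s.public, key)
	}
}

// ForRepublish returns the retained copy even if the public one is gone.
func (s *publishedStore) ForRepublish(key string) (record, bool) {
	rec, ok := s.published[key]
	return rec, ok
}

func main() {
	s := newPublishedStore()
	s.PutLocal("/ipns/QmExample", record{Value: []byte("v1"), Received: time.Now().Add(-48 * time.Hour)})
	s.Expire("/ipns/QmExample", 36*time.Hour) // public copy is dropped as too old
	if rec, ok := s.ForRepublish("/ipns/QmExample"); ok {
		fmt.Printf("republisher can still see %q\n", rec.Value)
	}
}
```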
This seems to go beyond the purpose of this PR (and blocks me). Is it OK if I: a) open issues for this problem to make sure that it's not forgotten?
@hsanjuan okay, go for it. Let's work on getting the records issue fixed separately.
I'm happy to work on fixing the republisher. Let's discuss in the separate issue.
Issue created here: #4749
Force-pushed from 671de56 to 4f164bc
This is ready on my side.
Who would be so kind as to review this? @whyrusleeping @Stebalien @magik6k 🥇 I guess it will need to sit until stable 0.4.14 is out? Thanks in advance anyway.
@hsanjuan it will for sure wait until the 0.4.15 dev cycle starts. The merge window for 0.4.14 is closed, AFAIK.
go-ipfs-blockstore appears to have a wrong hash somewhere - see the CI failure: https://ci.ipfs.team/blue/organizations/jenkins/go-ipfs/detail/PR-4742/13/pipeline
I need to rebase.
Force-pushed from 578e4c8 to 3110b02
Force-pushed from 3110b02 to c03b1b0
This uses working libp2p-kad-dht and libp2p-record libraries, and reverts the changes that were introduced to support the newer versions.

License: MIT
Signed-off-by: Hector Sanjuan <[email protected]>
Force-pushed from c03b1b0 to e8c6308
Rebased. I think things should be good now.
Update to latest go-datastore. Remove thirdparty/datastore2
This commit was moved from ipfs/kubo@3f6519b
There are 3 commits here:

- The `go-libp2p-record` "not signing records anymore" change, which requires some fixes in some tests in `namesys`.
- Use the `delayed` datastore in `exchange`, rather than thirdparty/datastore2. This datastore was added to go-datastore too.
- Replace `ds2.ThreadSafeCloserMapDatastore()` with `syncds.MutexWrap(datastore.NewMapDatastore())` (a minimal sketch of this replacement follows below). I am not sure what the point of the `Closer` part was, but `syncds` already implements the ThreadSafe part. If I screwed something up, it will hopefully show up in the tests (this only affects test code).