on running unscheduled HLT modules together with other "steps" #36938
A new Issue was created by @missirol Marino Missiroli. @Dr15Jones, @perrotta, @dpiparo, @makortel, @smuzaffar, @qliphy can you please review it and eventually sign/assign? Thanks. cms-bot commands are listed here

assign core,hlt

New categories assigned: core,hlt. @missirol, @Dr15Jones, @smuzaffar, @makortel, @Martin-Grunewald you have been requested to review this Pull request/Issue and eventually sign? Thanks
Pretty much everything in …

I'm curious if the issue is limited to output modules. For example, if one adds (for whatever reason) an EDAnalyzer in the same job as the HLT, and the EDAnalyzer reads a data product produced by some intermediate EDProducer whose execution is currently dictated by an EDFilter, would the expected behavior be that the EDAnalyzer causes that EDProducer to run or not?
I think with HLT using tasks (soon, to support GPUs!), the EDAnalyzer should trigger execution of the producer, but NO OM should trigger execution of any producer. (HLT is so far not yet using tasks, and in that case the EDAnalyzer should be on an HLT path which runs the required producers beforehand in that path.)
We would like to understand the use case better. Maybe we could discuss this topic in e.g. a core software meeting next week (in addition to possibly continuing the discussion here)? Currently an "unscheduled" module can be executed after all its input data products are either available, or known to never become available, with the exception that an unscheduled module whose output data products are not consumed is ignored (actually destructed early on). "Scheduled" execution (Path) makes a module execute even in the absence of consumers, and with EDFilters it adds the constraint that a module can be executed only if all preceding EDFilters have accepted the Event. We are wondering which benefits of moving EDProducers into Tasks (unscheduled) for HLT you consider important. The high-level question we are thinking about is: which modules lead to an unscheduled module being executed, and what would be the relationship of those modules?
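To make the scheduled/unscheduled distinction above concrete, a minimal sketch (the module labels and the ManyIntProducer test plugin are illustrative, not taken from an HLT menu):

```python
import FWCore.ParameterSet.Config as cms

process = cms.Process("DEMO")
process.source = cms.Source("EmptySource")
process.maxEvents.input = 1

process.scheduledProducer = cms.EDProducer("ManyIntProducer", ivalue = cms.int32(1))
process.unscheduledProducer = cms.EDProducer("ManyIntProducer", ivalue = cms.int32(2))

# The producer named directly in the Path runs on every event, even with no
# consumers ("scheduled"). The producer in the associated Task would run only
# if some module consumed its product; with no consumers it is ignored
# (actually destructed early on).
process.t = cms.Task(process.unscheduledProducer)
process.p = cms.Path(process.scheduledProducer, process.t)
```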
@fwyzard @Sam-Harper - please see this issue!
@makortel Thanks for your explanations. I don't grasp all the details here, but I should have made one point clearer in the issue's description. In the test in question, all Sequences/Paths in the HLT config were converted to use Tasks, not just those related to GPU modules or SwitchProducers. So, in my understanding, in this test we "over-taskified" (wrt what HLT strictly needs to support GPU modules), and maybe this is part of the problem. I haven't yet run the same type of tests on a version of the HLT menu where Tasks are used only where strictly needed (i.e. paths/sequences with CPU and GPU modules).
Apart from the interaction with SwitchProducers, ideally the use of Tasks can help simplify the structure of the HLT configuration. Here's an example based on the HLT muon paths (I didn't check why we use the …):

```python
HLT_IsoMu20_v15 = cms.Path(
HLTBeginSequence +
hltL1sSingleMu18 +
hltPreIsoMu20 +
hltL1fL1sMu18L1Filtered0 +
HLTL2muonrecoSequence +
cms.ignore(hltL2fL1sMu18L1f0L2Filtered10Q) +
HLTL3muonrecoSequence +
cms.ignore(hltL1fForIterL3L1fL1sMu18L1Filtered0) +
hltL3fL1sMu18L1f0L2f10QL3Filtered20Q +
HLTL3muonEcalPFisorecoSequenceNoBoolsForMuons +
hltL3fL1sMu18L1f0L2f10QL3Filtered20QL3pfecalIsoRhoFilteredEB0p14EE0p10 +
HLTL3muonHcalPFisorecoSequenceNoBoolsForMuons +
hltL3fL1sMu18L1f0L2f10QL3Filtered20QL3pfhcalIsoRhoFilteredHB0p16HE0p20 +
HLTTrackReconstructionForIsoL3MuonIter02 +
hltMuonTkRelIsolationCut0p07Map +
hltL3crIsoL1sMu18L1f0L2f10QL3f20QL3trkIsoFiltered0p07 +
HLTEndSequence )
HLT_IsoMu24_v13 = cms.Path(
HLTBeginSequence +
hltL1sSingleMu22 +
hltPreIsoMu24 +
hltL1fL1sMu22L1Filtered0 +
HLTL2muonrecoSequence +
cms.ignore(hltL2fL1sSingleMu22L1f0L2Filtered10Q) +
HLTL3muonrecoSequence +
cms.ignore(hltL1fForIterL3L1fL1sMu22L1Filtered0) +
hltL3fL1sSingleMu22L1f0L2f10QL3Filtered24Q +
HLTL3muonEcalPFisorecoSequenceNoBoolsForMuons +
hltL3crIsoL1sSingleMu22L1f0L2f10QL3f24QL3pfecalIsoRhoFilteredEB0p14EE0p10 +
HLTL3muonHcalPFisorecoSequenceNoBoolsForMuons +
hltL3crIsoL1sSingleMu22L1f0L2f10QL3f24QL3pfhcalIsoRhoFilteredHB0p16HE0p20 +
HLTTrackReconstructionForIsoL3MuonIter02 +
hltMuonTkRelIsolationCut0p07Map +
hltL3crIsoL1sSingleMu22L1f0L2f10QL3f24QL3trkIsoFiltered0p07 +
HLTEndSequence )
HLT_IsoMu24_eta2p1_v15 = cms.Path(
HLTBeginSequence +
hltL1sSingleMu22 +
hltPreIsoMu24eta2p1 +
hltL1fL1sMu22erL1Filtered0 +
HLTL2muonrecoSequence +
cms.ignore(hltL2fL1sSingleMu22erL1f0L2Filtered10Q) +
HLTL3muonrecoSequence +
cms.ignore(hltL1fForIterL3L1fL1sMu22erL1Filtered0) +
hltL3fL1sSingleMu22erL1f0L2f10QL3Filtered24Q +
HLTL3muonEcalPFisorecoSequenceNoBoolsForMuons +
hltL3crIsoL1sSingleMu22erL1f0L2f10QL3f24QL3pfecalIsoRhoFilteredEB0p14EE0p10 +
HLTL3muonHcalPFisorecoSequenceNoBoolsForMuons +
hltL3crIsoL1sSingleMu22erL1f0L2f10QL3f24QL3pfhcalIsoRhoFilteredHB0p16HE0p20 +
HLTTrackReconstructionForIsoL3MuonIter02 +
hltMuonTkRelIsolationCut0p07Map +
hltL3crIsoL1sSingleMu22erL1f0L2f10QL3f24QL3trkIsoFiltered0p07 +
HLTEndSequence )
HLT_IsoMu27_v16 = cms.Path(
HLTBeginSequence +
hltL1sSingleMu22or25 +
hltPreIsoMu27 +
hltL1fL1sMu22or25L1Filtered0 +
HLTL2muonrecoSequence +
cms.ignore(hltL2fL1sMu22or25L1f0L2Filtered10Q) +
HLTL3muonrecoSequence +
cms.ignore(hltL1fForIterL3L1fL1sMu22or25L1Filtered0) +
hltL3fL1sMu22Or25L1f0L2f10QL3Filtered27Q +
HLTL3muonEcalPFisorecoSequenceNoBoolsForMuons +
hltL3fL1sMu22Or25L1f0L2f10QL3Filtered27QL3pfecalIsoRhoFilteredEB0p14EE0p10 +
HLTL3muonHcalPFisorecoSequenceNoBoolsForMuons +
hltL3fL1sMu22Or25L1f0L2f10QL3Filtered27QL3pfhcalIsoRhoFilteredHB0p16HE0p20 +
HLTTrackReconstructionForIsoL3MuonIter02 +
hltMuonTkRelIsolationCut0p07Map +
hltL3crIsoL1sMu22Or25L1f0L2f10QL3f27QL3trkIsoFiltered0p07 +
HLTEndSequence )
HLT_IsoMu30_v4 = cms.Path(
HLTBeginSequence +
hltL1sSingleMu22or25 +
hltPreIsoMu30 +
hltL1fL1sMu22or25L1Filtered0 +
HLTL2muonrecoSequence +
cms.ignore(hltL2fL1sMu22or25L1f0L2Filtered10Q) +
HLTL3muonrecoSequence +
cms.ignore(hltL1fForIterL3L1fL1sMu22or25L1Filtered0) +
hltL3fL1sMu22Or25L1f0L2f10QL3Filtered30Q +
HLTL3muonEcalPFisorecoSequenceNoBoolsForMuons +
hltL3fL1sMu22Or25L1f0L2f10QL3Filtered30QL3pfecalIsoRhoFilteredEB0p14EE0p10 +
HLTL3muonHcalPFisorecoSequenceNoBoolsForMuons +
hltL3fL1sMu22Or25L1f0L2f10QL3Filtered30QL3pfhcalIsoRhoFilteredHB0p16HE0p20 +
HLTTrackReconstructionForIsoL3MuonIter02 +
hltMuonTkRelIsolationCut0p07Map +
hltL3crIsoL1sMu22Or25L1f0L2f10QL3f30QL3trkIsoFiltered0p07 +
HLTEndSequence )
```

can be simplified to

```python
HLTIsoMuonTask = cms.Task(
HLTL2muonrecoTask,
HLTL3muonrecoTask,
HLTL3muonEcalPFisorecoTask,
HLTL3muonHcalPFisorecoTask,
HLTTrackReconstructionForIsoL3MuonIter02Task,
hltMuonTkRelIsolationCut0p07Map)
HLT_IsoMu20_v15 = cms.Path(
HLTBeginSequence +
hltL1sSingleMu18 +
hltPreIsoMu20 +
hltL1fL1sMu18L1Filtered0 +
cms.ignore(hltL2fL1sMu18L1f0L2Filtered10Q) +
cms.ignore(hltL1fForIterL3L1fL1sMu18L1Filtered0) +
hltL3fL1sMu18L1f0L2f10QL3Filtered20Q +
hltL3fL1sMu18L1f0L2f10QL3Filtered20QL3pfecalIsoRhoFilteredEB0p14EE0p10 +
hltL3fL1sMu18L1f0L2f10QL3Filtered20QL3pfhcalIsoRhoFilteredHB0p16HE0p20 +
hltL3crIsoL1sMu18L1f0L2f10QL3f20QL3trkIsoFiltered0p07 +
HLTEndSequence,
HLTIsoMuonTask)
HLT_IsoMu24_v13 = cms.Path(
HLTBeginSequence +
hltL1sSingleMu22 +
hltPreIsoMu24 +
hltL1fL1sMu22L1Filtered0 +
cms.ignore(hltL2fL1sSingleMu22L1f0L2Filtered10Q) +
cms.ignore(hltL1fForIterL3L1fL1sMu22L1Filtered0) +
hltL3fL1sSingleMu22L1f0L2f10QL3Filtered24Q +
hltL3crIsoL1sSingleMu22L1f0L2f10QL3f24QL3pfecalIsoRhoFilteredEB0p14EE0p10 +
hltL3crIsoL1sSingleMu22L1f0L2f10QL3f24QL3pfhcalIsoRhoFilteredHB0p16HE0p20 +
hltL3crIsoL1sSingleMu22L1f0L2f10QL3f24QL3trkIsoFiltered0p07 +
HLTEndSequence,
HLTIsoMuonTask)
HLT_IsoMu24_eta2p1_v15 = cms.Path(
HLTBeginSequence +
hltL1sSingleMu22 +
hltPreIsoMu24eta2p1 +
hltL1fL1sMu22erL1Filtered0 +
cms.ignore(hltL2fL1sSingleMu22erL1f0L2Filtered10Q) +
cms.ignore(hltL1fForIterL3L1fL1sMu22erL1Filtered0) +
hltL3fL1sSingleMu22erL1f0L2f10QL3Filtered24Q +
hltL3crIsoL1sSingleMu22erL1f0L2f10QL3f24QL3pfecalIsoRhoFilteredEB0p14EE0p10 +
hltL3crIsoL1sSingleMu22erL1f0L2f10QL3f24QL3pfhcalIsoRhoFilteredHB0p16HE0p20 +
hltL3crIsoL1sSingleMu22erL1f0L2f10QL3f24QL3trkIsoFiltered0p07 +
HLTEndSequence,
HLTIsoMuonTask)
HLT_IsoMu27_v16 = cms.Path(
HLTBeginSequence +
hltL1sSingleMu22or25 +
hltPreIsoMu27 +
hltL1fL1sMu22or25L1Filtered0 +
cms.ignore(hltL2fL1sMu22or25L1f0L2Filtered10Q) +
cms.ignore(hltL1fForIterL3L1fL1sMu22or25L1Filtered0) +
hltL3fL1sMu22Or25L1f0L2f10QL3Filtered27Q +
hltL3fL1sMu22Or25L1f0L2f10QL3Filtered27QL3pfecalIsoRhoFilteredEB0p14EE0p10 +
hltL3fL1sMu22Or25L1f0L2f10QL3Filtered27QL3pfhcalIsoRhoFilteredHB0p16HE0p20 +
hltL3crIsoL1sMu22Or25L1f0L2f10QL3f27QL3trkIsoFiltered0p07 +
HLTEndSequence,
HLTIsoMuonTask)
HLT_IsoMu30_v4 = cms.Path(
HLTBeginSequence +
hltL1sSingleMu22or25 +
hltPreIsoMu30 +
hltL1fL1sMu22or25L1Filtered0 +
cms.ignore(hltL2fL1sMu22or25L1f0L2Filtered10Q) +
cms.ignore(hltL1fForIterL3L1fL1sMu22or25L1Filtered0) +
hltL3fL1sMu22Or25L1f0L2f10QL3Filtered30Q +
hltL3fL1sMu22Or25L1f0L2f10QL3Filtered30QL3pfecalIsoRhoFilteredEB0p14EE0p10 +
hltL3fL1sMu22Or25L1f0L2f10QL3Filtered30QL3pfhcalIsoRhoFilteredHB0p16HE0p20 +
hltL3crIsoL1sMu22Or25L1f0L2f10QL3f30QL3trkIsoFiltered0p07 +
HLTEndSequence,
HLTIsoMuonTask)
```
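In the simplified form, only the L1 seed, prescale, and filter modules remain explicitly on each Path, while the reconstruction Sequences become Tasks collected in the shared HLTIsoMuonTask; its producers would then run on demand, when a filter (or other consumer) on the Path needs their products.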
That... depends :-(

[yes, I know DQM modules are technically EDProducers, not EDAnalyzers - but IIRC they only produce tokens in the EndLumi/EndRun transitions, and should behave like EDAnalyzers during the event loop; otherwise, just replace "DQM module" with a generic EDAnalyzer in the discussion that follows]

In this example, if we add a DQM module that consumes the tracks associated to the L3 muons and their isolation cone, produced by some modules in the … In this case, adding filters in front of the DQM module might be enough: if we require that any of the …

However, as the information to be monitored in an EDAnalyzer (or "kept" in an OutputModule) gets more complex, using this kind of explicit scheduling becomes impossible.

Another case I'm not sure about (again an EDProducer, though) is how we handle the …

So, for HLT-related collections, my impression is that we do not want an EDAnalyzer or OutputModule to cause additional modules to be run... do we?

However, as soon as we mix the HLT configuration with the rest of the CMSSW configurations, those parts likely need the current logic that …
Thanks @fwyzard for the detailed reply. I used an EDAnalyzer just as an example, and indeed any other module outside of the trigger paths would serve the same purpose. It seems to me that the fundamental scheduling issue is not really specific to OutputModules, and that with Tasks there would still be a strong relationship between the modules directly in the Path and the modules in the Task associated to the Path (in contrast to adding the modules of the Task to a "global pool" of modules that could be run "on demand"). How about a Task-like configuration construct that would bind the contained modules strongly to the Path? The modules would be specified as a set (i.e. without order), and the framework would figure out their execution order within the Path according to their data dependencies (and if nothing in the Path depends on some module, that module would be ignored/deleted). Any consumption from a module outside of the Path would not cause those modules to be executed. Would anything in the HLT care if the modules in this Task-like construct would, on the C++ side, end up looking as if they were in the Path? This would be visible at least in …
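For illustration, a minimal sketch of what such a Path-bound construct could look like on the configuration side (using the ConditionalTask name it eventually acquired in #37305; the module labels and test plugins here are hypothetical):

```python
import FWCore.ParameterSet.Config as cms

process = cms.Process("DEMO")
process.source = cms.Source("EmptySource")
process.maxEvents.input = 1

# Producers listed as an unordered set; the framework works out their order
# within the Path from the data dependencies of the modules on the Path.
process.producerA = cms.EDProducer("ManyIntProducer", ivalue = cms.int32(1))
process.producerB = cms.EDProducer("ManyIntProducer", ivalue = cms.int32(2))
process.boundTask = cms.ConditionalTask(process.producerA, process.producerB)

# Only producerA is consumed by a module on the Path, so only producerA runs;
# producerB is ignored. Per the proposed semantics, consumers outside this
# Path would not trigger either of them.
process.filterX = cms.EDFilter("HLTBool", result = cms.bool(True))
process.adder = cms.EDProducer("AddIntsProducer", labels = cms.VInputTag("producerA"))

process.p = cms.Path(process.filterX + process.adder, process.boundTask)
```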
Seems like a reasonable approach -- but I'll need some time to think about the possible implications, especially when we need to "mix" the HLT with other steps like the L1 emulation, use of DQM modules, etc. @Martin-Grunewald @missirol what do you think?
I think it sounds like a good approach (thanks, Matti). If there is a prototype, or a way to test it, I'm happy to give it a try. I'm not the best person to comment on ConfDB, but ConfDB supports Tasks, and if this approach results in just having to use …
Seems ok to work. Indeed I need to correct my statement: adding an EDAnalyzer should not trigger execution of producers; the EDAnalyzer should fail gracefully if the collection is not there. In fact, for the Scouting triggers, we have a set of 'packer modules' which pack up event info in a condensed data format. Also, these producers, currently run in an EndPath just in front of an OM, should NOT trigger execution of the producers, just pack up what is already in the Event - see the discussion with more details here: https://its.cern.ch/jira/browse/CMSHLT-2231
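A schematic sketch of the structure described here, with entirely hypothetical labels and test plugins standing in for the real Scouting packers and output module:

```python
import FWCore.ParameterSet.Config as cms

process = cms.Process("HLT")
process.source = cms.Source("EmptySource")
process.maxEvents.input = 1

# Packer producers placed explicitly on the EndPath, right in front of the
# output module: they only repackage what is already in the Event and are
# not meant to trigger any upstream reconstruction.
process.hltScoutingPackerA = cms.EDProducer("ManyIntProducer", ivalue = cms.int32(1))
process.hltScoutingPackerB = cms.EDProducer("ManyIntProducer", ivalue = cms.int32(2))

process.hltOutputScouting = cms.OutputModule("PoolOutputModule",
    fileName = cms.untracked.string("scouting.root"),
    outputCommands = cms.untracked.vstring(
        "drop *",
        "keep *_hltScoutingPackerA_*_*",
        "keep *_hltScoutingPackerB_*_*",
    )
)

process.ScoutingOutput = cms.EndPath(
    process.hltScoutingPackerA +
    process.hltScoutingPackerB +
    process.hltOutputScouting
)
```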
Thanks for the feedback! It sounds to me like we could proceed to a prototype (without a specific discussion in a core sw meeting, unless anyone feels one would be useful).
Any news on this?
I have added all the python code needed to add a new ConditionalTask. I'm in the process of figuring out exactly how I want to pass the python info into the C++. Once I'm happy with that, the last step will be to get the C++ to inject the modules into the correct parts of our scheduling system.
Conditional Task's PR: #37305
The …

We are going to work on these (with the …
@makortel @Dr15Jones (cc: @fwyzard) Continuing from the discussion started in #37305 (comment), I'm trying to do one check related to ConditionalTask. In this check, I took the HLT menu available in CMSSW and customised it with the addition of GPU modules (this is done with the customiseHLTforPatatrack customisation). The test can be reproduced as follows:

```
cmsrel CMSSW_12_4_X_2022-04-05-1100
cd CMSSW_12_4_X_2022-04-05-1100/src
cmsenv
# Add ConditionalTask
git cms-merge-topic cms-sw:37305
# Changes to use customiseHLTforPatatrack with ConditionalTasks
git cms-merge-topic missirol:hltTest_conditionalTasksInCustomPatatrack
# Build: takes a while, many pkgs are checked out
scram b -j 8
# Run test on a file on lxplus
cmsRun HLTrigger/Configuration/test/OnLine_HLT_GRun.py realData=False globalTag=@ \
inputFiles=file:/afs/cern.ch/work/m/missirol/public/cmssw36938/RelVal_Raw_GRun_MC.root \
  &> OnLine_HLT_GRun_withConditionalTasks.log
```

What I see is that: …
I hope I'm not missing something obvious. I'd be grateful for any insight.
Thanks @missirol for the test. The stack trace looks strange (not hinting at anything obvious). I'm taking a look.
I was able to reproduce the crash, and to craft a simple test case that also crashes when run with >= 2 streams (i.e. the 1-thread, 2-stream case crashes). It turns out there is a deeper issue in how …
Hi @makortel @Dr15Jones, I wanted to ask for feedback on one example config involving … This is the config: https://gist.github.com/missirol/34a7ff84d801c9f006fc6cfc9b7a0a27 It is adapted to mimic actual Paths in the latest HLT menu. These Paths (…). The example runs in two ways, and we see the following.
@fwyzard tested all cases in this example, incl. changing …
Do you think "Case 2" could be fixed to give the expected outcome, i.e. the GPU module not executed and not written to output (row 7 in the table)?
Note that in the table, ❌ is the good behaviour :-)
IMHO the current behaviour is difficult to reason about, making it hard to use a … I think the behaviour should be the same whether a product is consumed directly (e.g. …) or … The behaviour should also not depend on whether the consuming module is actually run, or is prevented from running by a failing filter.
Maybe we're missing a check for the explicit consumption of …
The culprit wasn't exactly that, but a logic error (or oversight) in how the non-chosen …
Thanks @makortel!
Hi @Dr15Jones, Matti, while testing recent HLT menus with ConditionalTasks, I think I managed to translate the issue into a minimal example, which you find below [*]. Running it returns the following error:

```
----- Begin Fatal Exception 09-Jun-2022 10:29:47 CEST-----------------------
An exception of category 'ScheduleExecutionFailure' occurred while
[0] Calling beginJob
Exception Message:
Unrunnable schedule
The Path/EndPath configuration could cause the job to deadlock
module 'a1' is on path 'thePath' and depends on module 'intProducerCPU'
module 'intProducerCPU' is on path 'thePath' and follows module 'a1' on the path
----- End Fatal Exception -------------------------------------------------
```

The error is related to the branch of the SwitchProducer which is not enabled (because the test uses "enableGPU"). One finds that any one of the following changes will make the configuration work: …
Could you please have a look?

[*]

```python
# cfg.py
import FWCore.ParameterSet.Config as cms
import sys
enableGPU = (sys.argv[-1] == 'enableGPU')
class SwitchProducerTest(cms.SwitchProducer):
def __init__(self, **kargs):
super(SwitchProducerTest,self).__init__(
dict(
cpu = lambda accelerators: (True, -10),
gpu = lambda accelerators: (enableGPU, -9)
), **kargs)
process = cms.Process('TEST')
process.maxEvents.input = 10
process.options.numberOfThreads = 1
process.options.numberOfStreams = 1
process.options.numberOfConcurrentRuns = 1
process.options.numberOfConcurrentLuminosityBlocks = 1
process.source = cms.Source('EmptySource')
process.intProducerCPU = cms.EDProducer('ManyIntProducer', ivalue = cms.int32(1))
process.intProducerGPU = cms.EDProducer('ManyIntProducer', ivalue = cms.int32(2))
process.intProducer = SwitchProducerTest(
cpu = cms.EDAlias(intProducerCPU = cms.VPSet(cms.PSet(type = cms.string('*')))),
gpu = cms.EDAlias(intProducerGPU = cms.VPSet(cms.PSet(type = cms.string('*')))),
)
process.f = cms.EDFilter('HLTBool',
result = cms.bool(enableGPU)
)
process.t = cms.ConditionalTask(
process.intProducerCPU,
process.intProducerGPU,
process.intProducer,
)
process.a1 = cms.EDAnalyzer('GenericConsumer',
eventProducts = cms.untracked.vstring(
'intProducer@cpu',
'intProducer@gpu',
),
lumiProducts = cms.untracked.vstring(),
runProducts = cms.untracked.vstring(),
)
process.s1 = cms.Sequence(process.t)
process.p2 = cms.EDProducer("AddIntsProducer", labels = cms.VInputTag("intProducer@cpu","intProducer@gpu"))
process.s2 = cms.Sequence(process.p2)
process.thePath = cms.Path(
process.f
+ process.s1
+ process.a1
+ process.s2
)
```
I forgot to add that the test in #36938 (comment) was done in …
False alarm. The problem was that I was testing in … @fwyzard clarified to me that the issue is not present in … This is most likely because #38006 (#38015) not only fixes #36938 (comment), but also #36938 (comment). I also checked that the non-minimal example (i.e. the test with the actual HLT menu) works in … The conclusion would be that, for HLT's purposes, … Sorry for the noise!
@missirol glad you were able to solve the problem! Also good to hear that the last version of ConditionalTask is working for you.
(I continue in this issue, since much about …) Hi @makortel @Dr15Jones, during HLT-menu development (CMSHLT-2454), we came across a runtime error that seems related to … Error: …
The config is attached; the test was done in … The error occurs when the … A recent change we made is that now one SwitchProducer has a branch with 2 aliased modules:

```python
hltEcalDigis = SwitchProducerCUDA(
cpu = cms.EDAlias(
hltEcalDigisLegacy = cms.VPSet(
[..]
)
),
cuda = cms.EDAlias(
hltEcalDigisFromGPU = cms.VPSet(
[..]
),
hltEcalDigisLegacy = cms.VPSet(
[..]
)
),
)
```

With any one of the following 3 changes, the runtime error does not occur: …
Questions:
Thanks @missirol for the report, we are taking a look. I was able to reproduce the problem, and from first poking around the cause of the exception is not immediately clear. I see …
I think I have identified the problem, and it is related to

```python
hltEcalDigisFromGPU = cms.VPSet(cms.PSet(
type = cms.string('*')
)),
```

and that a certain … Assuming there is nothing else going on, specifying the products aliased from …
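Presumably the (truncated) suggestion is to replace the '*' wildcard with explicit per-product entries in the EDAlias; a hedged sketch, where the product types are purely illustrative and not necessarily the ones actually aliased from hltEcalDigisFromGPU:

```python
hltEcalDigisFromGPU = cms.VPSet(
    cms.PSet(type = cms.string('EBDigiCollection')),  # illustrative product type
    cms.PSet(type = cms.string('EEDigiCollection')),  # illustrative product type
)
```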
Thanks, Matti. This works and gives us a good quick solution, I think.
#39409 has a fix. I'll make backports to 12_5_X and 12_4_X after we settle on the content.
#43296 implements the printout of ConditionalTask modules that are not consumed by any module in any of the Paths the ConditionalTask (in question) is associated with, as discussed with some of the HLT folks. Per that discussion, this PR is likely the last development for ConditionalTask (unless some new use case pops up).
Just for the record, this discussion happened here (details in the minutes).
This issue is related to #36138 (as suggested by @silviodonato, I open a separate issue here, in an attempt to make things clearer).
TSG plans to introduce Tasks in the HLT menus for Run 3, as a necessary step to support configurations with both CPU and GPU modules handled via SwitchProducers.

#36138 showed that the unscheduled execution of HLT modules could become problematic if triggered prematurely by OutputModules ("prematurely" according to HLT's needs); this was addressed with the introduction of FinalPath by Core-Sw experts.

It was recently verified that FinalPath does solve the issue originally discussed in #36138 when running a standalone HLT configuration.

However, there might still be a problem in wfs where HLT needs to run together with other parts of the reconstruction (or "steps").
For example, there are TSG tests where HLT runs together with L1T-reco (L1 step) and/or Offline-reco (RECO step), and these tests currently fail if HLT uses unscheduled modules (L1 and RECO are just used as examples here; this might apply to other "steps" as well).
The technical reason is that, in such cases, cmsDriver adds OutputModules on EndPaths. This clashes with the needs of an HLT-with-Tasks (the original problem with using EndPaths). On the other hand, simply converting these EndPaths to FinalPaths could hinder the execution of L1-or-RECO modules (on the assumption that L1-or-RECO expect the execution of certain modules to happen only via the request of an OutputModule); see the sketch below.
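Purely as an illustration of this clash (all labels below are hypothetical, and test plugins stand in for real HLT modules; this is not an actual cmsDriver configuration):

```python
import FWCore.ParameterSet.Config as cms

process = cms.Process("HLT")
process.source = cms.Source("EmptySource")
process.maxEvents.input = 1

# An HLT producer placed in a Task, i.e. run unscheduled, only when consumed.
process.hltSomeProducer = cms.EDProducer("ManyIntProducer", ivalue = cms.int32(1))
process.hltTask = cms.Task(process.hltSomeProducer)

# A trigger path whose filter rejects every event.
process.hltPreExample = cms.EDFilter("HLTBool", result = cms.bool(False))
process.HLT_Example_v1 = cms.Path(process.hltPreExample, process.hltTask)

process.output = cms.OutputModule("PoolOutputModule",
    fileName = cms.untracked.string("output.root"),
    outputCommands = cms.untracked.vstring("keep *")
)

# What cmsDriver produces for combined HLT+L1/RECO workflows: the OutputModule
# sits on an EndPath, and its "keep *" consumes (and therefore schedules) the
# Task modules regardless of the Path's filter decisions.
process.ep = cms.EndPath(process.output)

# What a standalone HLT job can use instead: on a FinalPath the OutputModule
# does not trigger the execution of unscheduled modules.
# process.ep = cms.FinalPath(process.output)
```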
To reproduce the issue one can do the following: …

This issue was first discussed in #36878 (comment), where some details on the error messages were also given.
The question is: if HLT is to use Tasks, what are the possible solutions to the issue above, when the configuration contains OutputModules and runs HLT together with other "steps"?
FYI: @silviodonato @Martin-Grunewald @Sam-Harper @fwyzard