Scheduler, pathmanager, congestion control #300
This MPTCP upstream implementation is a rework of the previous one (the out-of-tree MPTCP fork) to fit upstream requirements. Some design choices had to be different, so differences are expected. Regarding extensions, the general idea is to move the customisation part to userspace. In other words, we try to avoid having multiple path managers and packet schedulers with many different options in kernel space, because that is complex to maintain, especially when there are multiple stable releases in place. It is also quicker to develop them in userspace once the API with the kernel is in place. Still, it is already possible to adapt MPTCP behaviour to your needs:
If something is missing, it is probably because nobody needed it before (or reported it, or better: implemented it).
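The knobs mentioned above can already be exercised from a shell. A minimal sketch, assuming an upstream MPTCP kernel (5.15+); the address and device name are illustrative, and most of these commands need root:

```shell
# Check that MPTCP is enabled in this kernel (net.mptcp.enabled defaults to 1)
sysctl net.mptcp.enabled

# Per-netns limits for the in-kernel path manager: allow up to 2 additional
# subflows, and accept up to 2 ADD_ADDR advertisements from the peer
ip mptcp limits set subflow 2 add_addr_accepted 2

# Announce a second interface as an extra path (address/device are examples)
ip mptcp endpoint add 10.0.0.2 dev eth1 subflow

# Inspect the resulting configuration
ip mptcp limits show
ip mptcp endpoint show
```

The same configuration applies per network namespace, so different containers can use different MPTCP policies on the same host.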
Hi @michalmorawski, does my previous reply answer your questions? Can we close this ticket?
I suggest closing this ticket. Feel free to reopen it if my previous reply was not clear enough.
Hi @matttbe, can you please clarify a bit more the two path-manager types you mentioned above, i.e. 'in-kernel' and 'userspace'?
Hi @Suvam77
When you use the 'in-kernel' path manager, the kernel takes actions depending on the endpoints you added per net namespace.
NetworkManager 1.40.0 (and higher) can also manage these endpoints, but in a more "automatic" way. mptcpd is a bit particular because it can be used with both the "in-kernel" path manager and the "userspace" one: depending on the plugin, it can control the endpoints for the "in-kernel" path manager, but plugins can also be written to use the "userspace" path manager and have finer control from userspace, e.g. reacting when a new MPTCP connection is being created.
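A minimal sketch of the two modes, assuming a recent kernel that exposes the `net.mptcp.pm_type` sysctl (the addresses and device names are illustrative, and these commands need root):

```shell
# Select the path-manager type for this netns:
#   0 = in-kernel (driven by configured endpoints)
#   1 = userspace (driven by a daemon such as mptcpd)
sysctl net.mptcp.pm_type          # show the current value
sysctl -w net.mptcp.pm_type=1     # hand path management to userspace

# With the in-kernel PM (pm_type=0), behaviour is driven by endpoints:
ip mptcp endpoint add 192.168.1.2 dev eth1 signal        # advertise via ADD_ADDR
ip mptcp endpoint add 10.0.0.2 dev wwan0 subflow backup  # extra path, backup priority
```

With `pm_type=1` the kernel takes no action on its own; a userspace daemon reacts to MPTCP events (new connection, address advertisement, ...) and decides which subflows to create.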
Yes, of course. Sorry for my silence. I am eagerly waiting for a scheduler example written in BPF 😊. I have seen it is already on the backlog table.
Michał.
Hi,
In previous versions of MPTCP, one could install one's own scheduler, path manager, congestion control, etc. using sysctl.
I cannot find such a feature in the new version of MPTCP (built into kernel 5.15, Ubuntu 22.04). Neither sysctl nor 'ip mptcp' has such options. Could you advise me on how to modify the default MPTCP behaviour (i.e., change the path manager, scheduler, and congestion controller)?
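For reference, the sysctl-based knobs described here belong to the out-of-tree fork (mptcp.org). A sketch of the contrast; the fork-side names below come from that fork's documentation and do not exist in the upstream implementation:

```shell
# Out-of-tree fork (e.g. v0.95): pluggable modules selected via sysctl
sysctl net.mptcp.mptcp_path_manager      # e.g. fullmesh, ndiffports
sysctl net.mptcp.mptcp_scheduler         # e.g. default, roundrobin, redundant

# Upstream (5.15+): congestion control uses the standard TCP knobs instead
sysctl net.ipv4.tcp_available_congestion_control  # algorithms this kernel offers
sysctl -w net.ipv4.tcp_congestion_control=cubic   # system-wide default (root)
# per connection: setsockopt(fd, IPPROTO_TCP, TCP_CONGESTION, ...)

# Upstream path management is configured per netns with `ip mptcp`
ip mptcp limits show
ip mptcp endpoint show
```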