Iox #25 chunk receiver building block #84
Changes from 12 commits
iceoryx_posh/internal/popo/building_blocks/chunk_distributor.hpp
@@ -12,17 +12,19 @@
 // See the License for the specific language governing permissions and
 // limitations under the License.

-#pragma once
+#ifndef IOX_POPO_CHUNK_DISTRIBUTOR_HPP_
+#define IOX_POPO_CHUNK_DISTRIBUTOR_HPP_

 #include "iceoryx_posh/internal/mepoo/shared_chunk.hpp"
 #include "iceoryx_posh/internal/popo/building_blocks/chunk_distributor_data.hpp"
+#include "iceoryx_posh/internal/popo/building_blocks/chunk_queue_pusher.hpp"

 namespace iox
 {
 namespace popo
 {
 /// @brief The ChunkDistributor is the low-layer building block to send SharedChunks to a dynamic number of ChunkQueues.
-/// Together with the ChunkQueue, the ChunkDistributor builds the infrastructure to exchange memory chunks between
+/// Together with the ChunkQueuePusher, the ChunkDistributor builds the infrastructure to exchange memory chunks between
 /// different data producers and consumers that could be located in different processes. Besides a modifiable container
 /// of ChunkQueues to which a SharedChunk can be delivered, it holds a configurable history of last sent chunks. This
 /// allows providing a newly added queue with a number of the last chunks to start from. This is needed for functionality
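To make the history semantics described in the doc comment concrete: the distributor keeps the last N delivered chunks and can replay them to a queue that joins late. The following is a minimal, self-contained sketch of that behaviour in plain C++; MiniDistributor, Queue, and the int payload are illustrative stand-ins, not iceoryx types.

#include <algorithm>
#include <cstdint>
#include <deque>
#include <iostream>
#include <vector>

struct Queue // stand-in for a chunk queue
{
    std::vector<int> received;
};

class MiniDistributor
{
  public:
    explicit MiniDistributor(uint64_t historyCapacity)
        : m_historyCapacity(historyCapacity)
    {
    }

    // A queue that joins late first receives up to requestedHistory chunks
    // from the stored history, then takes part in normal delivery.
    void addQueue(Queue& queue, uint64_t requestedHistory)
    {
        uint64_t const available = std::min<uint64_t>(requestedHistory, m_history.size());
        for (uint64_t i = m_history.size() - available; i < m_history.size(); ++i)
        {
            queue.received.push_back(m_history[i]);
        }
        m_queues.push_back(&queue);
    }

    // Deliver to all stored queues and remember the chunk in the history.
    void deliverToAllStoredQueues(int chunk)
    {
        for (Queue* queue : m_queues)
        {
            queue->received.push_back(chunk);
        }
        m_history.push_back(chunk);
        if (m_history.size() > m_historyCapacity)
        {
            m_history.pop_front(); // the oldest chunk falls out
        }
    }

  private:
    uint64_t m_historyCapacity;
    std::vector<Queue*> m_queues;
    std::deque<int> m_history;
};

int main()
{
    MiniDistributor distributor(2U); // keep the last two chunks
    Queue early;
    distributor.addQueue(early, 0U);
    distributor.deliverToAllStoredQueues(1);
    distributor.deliverToAllStoredQueues(2);
    distributor.deliverToAllStoredQueues(3);

    Queue late;
    distributor.addQueue(late, 2U); // replays chunks 2 and 3 from history
    std::cout << "late joiner got " << late.received.size() << " chunks\n";
}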
@@ -31,22 +33,26 @@ namespace popo
 ///
 /// About Concurrency:
 /// This ChunkDistributor can be used with different LockingPolicies for different scenarios
-/// When different threads operate on it (e.g. application sends chunks and RouDi adds and removes queues),
-/// a locking policy must be used that ensures consistent data in the ChunkDistributorData.
Comment on lines -34 to -35

    Keep fighting the trailing whitespaces :-)

    Doesn't your IDE automatically remove trailing whitespaces ... me starting an IDE war and running away ;)
+/// When different threads operate on it (e.g. application sends chunks and RouDi adds and removes queues),
+/// a locking policy must be used that ensures consistent data in the ChunkDistributorData.
 /// @todo There are currently some challenges:
-/// For the stored queues and the history, containers are used which are not thread safe. Therefore we use an inter-process
-/// mutex. But this can lead to deadlocks if a user process gets terminated while one of its threads is in the ChunkDistributor
-/// and holds a lock. An easier setup would be if changing the queues by a middleware thread and sending chunks by the user process
-/// would not interleave. I.e. there is no concurrent access to the containers. Then a memory synchronization would be sufficient
-/// The cleanup() call is the biggest challenge. This is used to free chunks that are still held by a not properly terminated
-/// user application. Even if access from middleware and user threads do not overlap, the history container to cleanup could be
-/// in an inconsistent state as the application was hard terminated while changing it. We would need a container like the
-/// UsedChunkList to have one that is robust against such inconsistencies.... A perfect job for our future selves
-template <uint32_t MaxQueues, typename LockingPolicy>
+/// For the stored queues and the history, containers are used which are not thread safe. Therefore we use an
+/// inter-process mutex. But this can lead to deadlocks if a user process gets terminated while one of its
+/// threads is in the ChunkDistributor and holds a lock. An easier setup would be if changing the queues
+/// by a middleware thread and sending chunks by the user process would not interleave. I.e. there is no concurrent
+/// access to the containers. Then a memory synchronization would be sufficient.
+/// The cleanup() call is the biggest challenge. This is used to free chunks that are still held by a not properly
+/// terminated user application. Even if access from middleware and user threads do not overlap, the history
+/// container to cleanup could be in an inconsistent state as the application was hard terminated while changing it.
+/// We would need a container like the UsedChunkList to have one that is robust against such inconsistencies....
+/// A perfect job for our future selves
+template <typename ChunkDistributorDataType>
 class ChunkDistributor
 {
   public:
-    using MemberType_t = ChunkDistributorData<MaxQueues, LockingPolicy>;
+    using MemberType_t = ChunkDistributorDataType;
+    using ChunkQueueData_t = typename ChunkDistributorDataType::ChunkQueueData_t;
+    using ChunkQueuePusher_t = typename ChunkDistributorDataType::ChunkQueuePusher_t;

     ChunkDistributor(MemberType_t* const chunkDistrubutorDataPtr) noexcept;

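The LockingPolicy mechanism discussed above works because the data struct inherits lock() and unlock() from the policy, so the data object itself satisfies BasicLockable and a std::lock_guard can be taken directly on it. Below is a simplified sketch of the pattern; Data and increment are illustrative names, and the real ThreadSafePolicy (in chunk_distributor_data.hpp further down) wraps an inter-process-capable mutex rather than std::mutex.

#include <mutex>

class ThreadSafePolicy
{
  public: // public so std::lock_guard can call lock()/unlock()
    void lock() { m_mutex.lock(); }
    void unlock() { m_mutex.unlock(); }

  private:
    std::mutex m_mutex; // placeholder for an inter-process mutex
};

class SingleThreadedPolicy
{
  public:
    void lock() {}   // no-op: the caller guarantees single-threaded access
    void unlock() {}
};

template <typename LockingPolicy>
struct Data : public LockingPolicy
{
    int m_value{0};
};

template <typename LockingPolicy>
void increment(Data<LockingPolicy>& data)
{
    // works because Data inherits lock()/unlock() and is therefore BasicLockable
    std::lock_guard<Data<LockingPolicy>> guard(data);
    ++data.m_value;
}

int main()
{
    Data<ThreadSafePolicy> safe;
    increment(safe);

    Data<SingleThreadedPolicy> fast;
    increment(fast); // same code path, zero locking cost
}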
@@ -62,11 +68,11 @@ class ChunkDistributor
     /// @param[in] requestedHistory number of last chunks from history to send if available. If the available history
     /// is smaller, then only the available chunks are provided
     /// @return true on success otherwise false
-    bool addQueue(ChunkQueue::MemberType_t* const queueToAdd, uint64_t requestedHistory = 0) noexcept;
+    bool addQueue(ChunkQueueData_t* const queueToAdd, uint64_t requestedHistory = 0) noexcept;

     /// @brief Remove a queue from the internal list of chunk queues
     /// @param[in] chunk queue to remove from the list
-    void removeQueue(ChunkQueue::MemberType_t* const queueToRemove) noexcept;
+    void removeQueue(ChunkQueueData_t* const queueToRemove) noexcept;

     /// @brief Delete all the stored chunk queues
     void removeAllQueues() noexcept;
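Wiring against the new signatures might look roughly like the sketch below. It only uses types and signatures visible in this diff, but the rest is a simplification: the data objects are plain local variables and assumed default-constructible here, whereas in the real system they are placed in shared memory.

#include "iceoryx_posh/internal/popo/building_blocks/chunk_distributor.hpp"
#include "iceoryx_posh/internal/popo/building_blocks/chunk_distributor_data.hpp"

// Hypothetical wiring sketch, not production code.
void wireUpQueueSketch()
{
    using ChunkDistributorData_t = iox::popo::ChunkDistributorData<10U, iox::popo::ThreadSafePolicy>;
    using ChunkDistributor_t = iox::popo::ChunkDistributor<ChunkDistributorData_t>;

    ChunkDistributorData_t distributorData{2U}; // keep a history of two chunks
    ChunkDistributor_t distributor(&distributorData);

    ChunkDistributor_t::ChunkQueueData_t queueData; // consumer-side queue data
    distributor.addQueue(&queueData, 1U); // replay up to one chunk from history
    // ... deliver chunks ...
    distributor.removeQueue(&queueData);
}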
@@ -84,7 +90,7 @@ class ChunkDistributor
     /// history
     /// @param[in] chunk queue to which this chunk shall be delivered
     /// @param[in] shared chunk to be delivered
-    void deliverToQueue(ChunkQueue::MemberType_t* const queue, mepoo::SharedChunk chunk) noexcept;
+    void deliverToQueue(ChunkQueueData_t* const queue, mepoo::SharedChunk chunk) noexcept;

     /// @brief Update the chunk history but do not deliver the chunk to any chunk queue. E.g. use case is to update a
     /// non offered field in ara
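Read together, these two paths are complementary: deliverToQueue() targets a single queue and deliberately leaves the history untouched, while the history-only update records a chunk that no queue receives right now. In terms of the illustrative MiniDistributor sketch further up, they could be written as two further member functions; names and the int payload are again made up for illustration.

// Two additional member functions for the MiniDistributor sketch above
// (to be placed inside the class body; not iceoryx code):

// Deliver to one specific queue only; the history is deliberately not updated.
void deliverToQueue(Queue& queue, int chunk)
{
    queue.received.push_back(chunk);
}

// Record a chunk in the history without delivering it to any stored queue,
// e.g. to keep a value available for late joiners while it is not offered.
void updateHistoryWithoutDelivery(int chunk)
{
    m_history.push_back(chunk);
    if (m_history.size() > m_historyCapacity)
    {
        m_history.pop_front();
    }
}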
@@ -117,3 +123,5 @@
 } // namespace iox

 #include "iceoryx_posh/internal/popo/building_blocks/chunk_distributor.inl"
+
+#endif
iceoryx_posh/internal/popo/building_blocks/chunk_distributor_data.hpp
@@ -12,12 +12,13 @@
 // See the License for the specific language governing permissions and
 // limitations under the License.

-#pragma once
+#ifndef IOX_POPO_CHUNK_DISTRIBUTOR_DATA_HPP_
+#define IOX_POPO_CHUNK_DISTRIBUTOR_DATA_HPP_

 #include "iceoryx_posh/iceoryx_posh_types.hpp"
 #include "iceoryx_posh/internal/log/posh_logging.hpp"
 #include "iceoryx_posh/internal/mepoo/shared_chunk.hpp"
 #include "iceoryx_posh/internal/popo/building_blocks/chunk_queue.hpp"
 #include "iceoryx_posh/internal/popo/building_blocks/chunk_queue_pusher.hpp"
 #include "iceoryx_utils/cxx/algorithm.hpp"
 #include "iceoryx_utils/cxx/vector.hpp"
 #include "iceoryx_utils/internal/posix_wrapper/mutex.hpp"
@@ -29,8 +30,6 @@ namespace iox
 {
 namespace popo
 {
-struct ChunkQueueData;
-
Comment on lines -32 to -33

    Nice catch!
 class ThreadSafePolicy
 {
   public: // needs to be public since we want to use std::lock_guard
@@ -66,33 +65,37 @@ class SingleThreadedPolicy
     }
 };

-template <uint32_t MaxQueues, typename LockingPolicy>
+template <uint32_t MaxQueues, typename LockingPolicy, typename ChunkQueuePusherType = ChunkQueuePusher>
 struct ChunkDistributorData : public LockingPolicy
 {
-    using lockGuard_t = std::lock_guard<ChunkDistributorData<MaxQueues, LockingPolicy>>;
+    using LockGuard_t = std::lock_guard<ChunkDistributorData<MaxQueues, LockingPolicy, ChunkQueuePusherType>>;
+    using ChunkQueuePusher_t = ChunkQueuePusherType;
+    using ChunkQueueData_t = typename ChunkQueuePusherType::MemberType_t;

     ChunkDistributorData(uint64_t historyCapacity = 0u) noexcept
-        : m_historyCapacity(algorithm::min(historyCapacity, MAX_SENDER_SAMPLE_HISTORY_CAPACITY))
+        : m_historyCapacity(algorithm::min(historyCapacity, MAX_HISTORY_CAPACITY_OF_CHUNK_DISTRIBUTOR))
     {
         if (m_historyCapacity != historyCapacity)
         {
             LogWarn() << "Chunk history too large, reducing from " << historyCapacity << " to "
-                      << MAX_SENDER_SAMPLE_HISTORY_CAPACITY;
+                      << MAX_HISTORY_CAPACITY_OF_CHUNK_DISTRIBUTOR;
         }
     }

     const uint64_t m_historyCapacity;

-    using QueueContainer_t = cxx::vector<ChunkQueue::MemberType_t*, MaxQueues>;
+    using QueueContainer_t = cxx::vector<ChunkQueueData_t*, MaxQueues>;
     QueueContainer_t m_queues;

     /// @todo using ChunkManagement instead of SharedChunk as in UsedChunkList?
     /// When to store a SharedChunk and when the included ChunkManagement must be used?
     /// If we would make the ChunkDistributor lock-free, can we then extend the UsedChunkList to
     /// be like a ring buffer and use this for the history? This would be needed to be able to safely cleanup
-    using SampleHistoryContainer_t = cxx::vector<mepoo::SharedChunk, MAX_SENDER_SAMPLE_HISTORY_CAPACITY>;
-    SampleHistoryContainer_t m_sampleHistory;
+    using HistoryContainer_t = cxx::vector<mepoo::SharedChunk, MAX_HISTORY_CAPACITY_OF_CHUNK_DISTRIBUTOR>;
+    HistoryContainer_t m_history;
 };

 } // namespace popo
 } // namespace iox

+#endif
Comment

    Nice to have two interfaces!
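The new defaulted ChunkQueuePusherType parameter is a small dependency-injection seam: the distributor data derives its queue-data type from the injected pusher, so a test can substitute a mock without touching production call sites. A generic sketch of that pattern follows; all names here are illustrative stand-ins, not iceoryx types.

#include <cstdint>

// Production-side types
struct RealQueueData
{
    // in the real system this would live in shared memory
};

struct RealPusher
{
    using MemberType_t = RealQueueData;
};

// The data struct derives its queue-data type from the injected pusher type;
// the default keeps production call sites unchanged.
template <uint32_t MaxQueues, typename ChunkQueuePusherType = RealPusher>
struct DistributorData
{
    using ChunkQueuePusher_t = ChunkQueuePusherType;
    using ChunkQueueData_t = typename ChunkQueuePusherType::MemberType_t;

    ChunkQueueData_t* m_queues[MaxQueues]{}; // simplified queue container
};

// Test-side mock, injected without modifying the production type
struct MockQueueData
{
    int pushCount{0};
};

struct MockPusher
{
    using MemberType_t = MockQueueData;
};

int main()
{
    DistributorData<4U> production;            // uses RealPusher by default
    DistributorData<4U, MockPusher> underTest; // mock injected for a unit test
    static_cast<void>(production);
    static_cast<void>(underTest);
}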