diff --git a/docs/03-exports/aliases-api.include b/docs/03-exports/aliases-api.include index a185ab0fc..e0132012c 100644 --- a/docs/03-exports/aliases-api.include +++ b/docs/03-exports/aliases-api.include @@ -885,6 +885,7 @@ .. |EntityId_t-api| replace:: :cpp:struct:`EntityId_t` .. |InitialAnnouncementConfig-api| replace:: :cpp:struct:`InitialAnnouncementConfig` .. |InitialAnnouncementConfig::period-api| replace:: :cpp:member:`period` +.. |InitialAnnouncementConfig::count-api| replace:: :cpp:member:`count` .. |BuiltinAttributes-api| replace:: :cpp:class:`BuiltinAttributes` .. |BuiltinAttributes::discovery_config-api| replace:: :cpp:member:`discovery_config` .. |BuiltinAttributes::metatrafficUnicastLocatorList-api| replace:: :cpp:member:`metatrafficUnicastLocatorList` diff --git a/docs/fastdds/faq/dds_layer/dds_layer.rst b/docs/fastdds/faq/dds_layer/dds_layer.rst index 0017da51d..818e1e9e0 100644 --- a/docs/fastdds/faq/dds_layer/dds_layer.rst +++ b/docs/fastdds/faq/dds_layer/dds_layer.rst @@ -10,6 +10,9 @@ DDS LAYER Frequently Asked Questions CORE ---- +Entity +^^^^^^ + .. collapse:: What is the significance of a unique ID in the context of DDS and RTPS entities, and why is it important to have a shared ID between these entities? @@ -54,69 +57,83 @@ CORE ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- -.. collapse:: How do QoS policies influence the behavior of DDS entities, and what are the potential impacts of misconfiguring these policies on system performance and reliability? +.. collapse:: What are the advantages and potential drawbacks of using listeners for asynchronous notifications in a DDS system, and how can these drawbacks be mitigated? |br| - QoS policies determine the operational parameters of DDS entities, such as latency, reliability, and resource usage. Misconfigured QoS policies can lead to suboptimal performance, such as increased latency, dropped messages, or excessive resource consumption, which can negatively affect the overall system reliability and efficiency. For further information go to :ref:`dds_layer_core_policy`. + Listeners provide real-time notifications of status changes, improving responsiveness and allowing for event-driven programming. However, drawbacks include the potential for increased complexity and resource contention. Mitigation strategies involve keeping listener functions simple and offloading heavy processing to other parts of the system. For further information go to :ref:`dds_layer_core_entity`. ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- -.. collapse:: In what scenarios might it be necessary to modify the QoS policies of an entity after its creation, and what are the best practices for doing so using the ``set_qos()`` function? +.. 
collapse:: How does the inheritance of listener interfaces across different entity types enhance the flexibility and modularity of the system? |br| - Scenarios necessitating QoS modification post-creation include changes in network conditions, evolving application requirements, or the need to optimize performance. Best practices include using the ``set_qos()`` function judiciously, validating the new policies before applying them, and monitoring the system for any adverse effects after changes. For further information go to :ref:`dds_layer_domainParticipant`. + The inheritance of listener interfaces enhances flexibility by allowing different entity types to share common callback mechanisms while enabling customization for specific types. This modularity simplifies code management and fosters reuse. For further information go to :ref:`dds_layer_core_entity`. ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- -.. collapse:: What are the advantages and potential drawbacks of using listeners for asynchronous notifications in a DDS system, and how can these drawbacks be mitigated? +.. collapse:: How does the limitation of operations on disabled entities influence the design and implementation of a DDS-based system? |br| - Listeners provide real-time notifications of status changes, improving responsiveness and allowing for event-driven programming. However, drawbacks include the potential for increased complexity and resource contention. Mitigation strategies involve keeping listener functions simple and offloading heavy processing to other parts of the system. For further information go to :ref:`dds_layer_core_entity`. + Disabled entities can only perform basic operations such as QoS and listener management, status querying, and subentity creation/deletion. This restriction ensures that incomplete or improperly configured entities do not adversely impact the system. For further information go to :ref:`dds_layer_core_entity`. ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- -.. collapse:: How does the inheritance of listener interfaces across different entity types enhance the flexibility and modularity of the system? +.. collapse:: What potential issues could arise from creating or deleting entities within a listener callback, and why is it recommended to avoid such actions? |br| - The inheritance of listener interfaces enhances flexibility by allowing different entity types to share common callback mechanisms while enabling customization for specific types. This modularity simplifies code management and fosters reuse. For further information go to :ref:`dds_layer_core_entity`. + Creating or deleting entities within listener callbacks can cause race conditions, deadlocks, or undefined behavior due to concurrent access. 
It is recommended to use listeners solely for event notification and delegate entity management to higher-level components outside the callback scope. For further information go to :ref:`dds_layer_core_entity_commonchars_listener`. ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- -.. collapse:: What role do status objects play in the communication lifecycle of DDS entities, and how do they interact with listener callbacks to notify applications of status changes? +.. collapse:: In what ways do these custom callbacks differ from standard DDS callbacks, and what additional capabilities do they provide to the application developers? |br| - Status objects track the communication state of entities, triggering listener callbacks when changes occur. This mechanism ensures that applications are promptly informed of relevant status updates, facilitating timely responses to communication events. For further information go to :ref:`dds_layer_core_status`. + Unlike standard DDS callbacks, Fast DDS custom callbacks are always enabled and offer functionality tailored to the Fast DDS implementation. They provide more granular control over participant and data discovery processes, enhancing the application's ability to react to dynamic changes. For further information go to :ref:`dds_layer_core_entity`. ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- -.. collapse:: How does the concept of StatusCondition link entities to Wait-sets, and what benefits does this linkage provide in terms of system synchronization and event handling? +Policy +^^^^^^ + +.. collapse:: How do QoS policies influence the behavior of DDS entities, and what are the potential impacts of misconfiguring these policies on system performance and reliability? |br| - StatusCondition provides a means to monitor multiple status changes efficiently by linking entities to Wait-sets. This linkage allows for consolidated event handling, reducing polling overhead and improving synchronization within the system. For further information go to :ref:`dds_layer_core_status`. + QoS policies determine the operational parameters of DDS entities, such as latency, reliability, and resource usage. Misconfigured QoS policies can lead to suboptimal performance, such as increased latency, dropped messages, or excessive resource consumption, which can negatively affect the overall system reliability and efficiency. For further information go to :ref:`dds_layer_core_policy`. 
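+    For illustration, a minimal sketch (``publisher`` and ``topic`` are assumed to already exist, and the values shown are arbitrary) of tuning a DataWriter's QoS before the entity is created:
+
+    .. code-block:: cpp
+
+        #include <fastdds/dds/publisher/Publisher.hpp>
+        #include <fastdds/dds/publisher/DataWriter.hpp>
+        #include <fastdds/dds/publisher/qos/DataWriterQos.hpp>
+
+        using namespace eprosima::fastdds::dds;
+
+        // Start from the Publisher's default DataWriter QoS and adjust it.
+        DataWriterQos wqos = publisher->get_default_datawriter_qos();
+        wqos.reliability().kind = RELIABLE_RELIABILITY_QOS;  // guaranteed delivery, at the cost of extra traffic
+        wqos.history().kind = KEEP_LAST_HISTORY_QOS;         // bound memory usage
+        wqos.history().depth = 10;                           // keep only the 10 most recent samples
+
+        DataWriter* writer = publisher->create_datawriter(topic, wqos);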
+ +------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- + +.. collapse:: In what scenarios might it be necessary to modify the QoS policies of an entity after its creation, and what are the best practices for doing so using the ``set_qos()`` function? + + + + + |br| + + Scenarios necessitating QoS modification post-creation include changes in network conditions, evolving application requirements, or the need to optimize performance. Best practices include using the ``set_qos()`` function judiciously, validating the new policies before applying them, and monitoring the system for any adverse effects after changes. For further information go to :ref:`dds_layer_domainParticipant`. ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- @@ -131,152 +148,167 @@ CORE ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- -.. collapse:: How does the limitation of operations on disabled entities influence the design and implementation of a DDS-based system? +.. collapse:: How do the specific callbacks provided by Fast DDS, such as on_participant_discovery and on_data_writer_discovery, enhance the functionality of the DDS system? |br| - Disabled entities can only perform basic operations such as QoS and listener management, status querying, and subentity creation/deletion. This restriction ensures that incomplete or improperly configured entities do not adversely impact the system. For further information go to :ref:`dds_layer_core_entity`. + Fast DDS-specific callbacks, such as ``on_participant_discovery`` and ``on_data_writer_discovery``, provide additional hooks for monitoring and responding to specific events within the DDS framework. These callbacks offer greater control and insight into the system's operational state. For further information go to :ref:`dds_layer_core_entity`. ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- -.. collapse:: How does the use of StatusMask in enabling or disabling specific callbacks affect the responsiveness and behavior of a DDS system? +.. 
collapse:: How does the DeadlineQoS policy apply differently to topics with keys compared to those without keys, and what are the practical considerations for using keys in such scenarios? |br| - The |StatusMask-api| allows selective enabling or disabling of specific callbacks, fine-tuning the system's responsiveness and avoiding unnecessary processing. Proper management of |StatusMask-api| settings ensures that only relevant events trigger callbacks, optimizing system behavior. For further information go to :ref:`dds_layer_core_status`. + For topics with keys, the |DeadlineQosPolicy-api| is applied individually to each key. This means that each unique key (e.g., each vehicle in a fleet) must meet its deadline. The practical consideration is that the publisher must manage deadlines for multiple keys simultaneously, which can be complex but allows for more granular control over data timeliness. For further information go to :ref:`standard`. ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- -.. collapse:: What potential issues could arise from creating or deleting entities within a listener callback, and why is it recommended to avoid such actions? +.. collapse:: Why is it crucial for the offered deadline period on DataWriters to be less than or equal to the requested deadline period on DataReaders, and what could be the consequences of mismatched periods? |br| - Creating or deleting entities within listener callbacks can cause race conditions, deadlocks, or undefined behavior due to concurrent access. It is recommended to use listeners solely for event notification and delegate entity management to higher-level components outside the callback scope. For further information go to :ref:`dds_layer_core_entity_commonchars_listener`. + The requirement for the offered deadline period on DataWriters to be less than or equal to the requested deadline period on DataReaders ensures that the DataWriter can meet the DataReader's expectations. Mismatched periods could result in the DataReader perceiving missed deadlines, leading to potential data loss and reliability issues. For further information go to :ref:`deadline_compatibilityrule`. ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- -.. collapse:: How do the specific callbacks provided by Fast DDS, such as on_participant_discovery and on_data_writer_discovery, enhance the functionality of the DDS system? +.. collapse:: Why is it crucial for the offered deadline period on DataWriters to be less than or equal to the requested deadline period on DataReaders, and what could be the consequences of mismatched periods? 
|br| - Fast DDS-specific callbacks, such as ``on_participant_discovery `` and ``on_data_writer_discovery``, provide additional hooks for monitoring and responding to specific events within the DDS framework. These callbacks offer greater control and insight into the system's operational state. For further information go to :ref:`dds_layer_core_entity`. + The requirement for the offered deadline period on DataWriters to be less than or equal to the requested deadline period on DataReaders ensures that the DataWriter can meet the DataReader's expectations. Mismatched periods could result in the DataReader perceiving missed deadlines, leading to potential data loss and reliability issues. For further information go to :ref:`deadline_compatibilityrule`. ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- -.. collapse:: In what ways do these custom callbacks differ from standard DDS callbacks, and what additional capabilities do they provide to the application developers? +.. collapse:: How should the DeadlineQosPolicy be configured in conjunction with the TimeBasedFilterQosPolicy to ensure consistency and avoid data loss or delays? |br| - Unlike standard DDS callbacks, Fast DDS custom callbacks are always enabled and offer functionality tailored to the Fast DDS implementation. They provide more granular control over participant and data discovery processes, enhancing the application's ability to react to dynamic changes. For further information go to :ref:`dds_layer_core_entity`. + To ensure consistency, the |DeadlineQosPolicy-api| period must be greater or equal to the minimum separation specified in the |TimeBasedFilterQosPolicy-api|. This prevents the system from attempting to enforce a stricter deadline than the filter allows, avoiding unnecessary alarms and ensuring smooth data flow. For further information go to :ref:`timebasedfilterqospolicy`. ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- -.. collapse:: How does the DeadlineQoS policy apply differently to topics with keys compared to those without keys, and what are the practical considerations for using keys in such scenarios? +.. collapse:: How does the default value of c_TimeInfinite for the period in DeadlineQoS affect the behavior of DataWriters and DataReaders, and under what circumstances might this default value be modified? |br| - For topics with keys, the |DeadlineQosPolicy-api| is applied individually to each key. This means that each unique key (e.g., each vehicle in a fleet) must meet its deadline. The practical consideration is that the publisher must manage deadlines for multiple keys simultaneously, which can be complex but allows for more granular control over data timeliness. For further information go to :ref:`standard`. 
+ The default value of ``c_TimeInfinite`` means that there is no deadline, so DataWriters and DataReaders are not constrained by time. This is useful for applications where timeliness is not critical. However, for time-sensitive applications, this default should be changed to a specific duration to ensure timely data updates. For further information go to :ref:`deadlineqospolicy`. ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- -.. collapse:: Why is it crucial for the offered deadline period on DataWriters to be less than or equal to the requested deadline period on DataReaders, and what could be the consequences of mismatched periods? +.. collapse:: What are the key differences between BY_RECEPTION_TIMESTAMP and BY_SOURCE_TIMESTAMP in DestinationOrderQoS, and how do these settings impact the consistency and order of received data? |br| - The requirement for the offered deadline period on DataWriters to be less than or equal to the requested deadline period on DataReaders ensures that the DataWriter can meet the DataReader's expectations. Mismatched periods could result in the DataReader perceiving missed deadlines, leading to potential data loss and reliability issues. For further information go to :ref:`deadline_compatibilityrule`. + ``BY_RECEPTION_TIMESTAMP`` orders data based on when it is received, which can lead to different DataReaders having different final values due to network delays. ``BY_SOURCE_TIMESTAMP`` ensures consistency across all DataReaders by using the send time from the DataWriter. ``BY_SOURCE_TIMESTAMP`` is preferred for ensuring consistent data states across multiple DataReaders. Both can be configured by using |DestinationOrderQosPolicyKind-api|. For further information go to :ref:`destinationorderqospolicy`. ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- -.. collapse:: How should the DeadlineQosPolicy be configured in conjunction with the TimeBasedFilterQosPolicy to ensure consistency and avoid data loss or delays? +.. collapse:: In what scenarios might the BY_RECEPTION_TIMESTAMP_DESTINATIONORDER_QOS be preferred over BY_SOURCE_TIMESTAMP_DESTINATIONORDER_QOS, and vice versa? |br| - To ensure consistency, the |DeadlineQosPolicy-api| period must be greater or equal to the minimum separation specified in the |TimeBasedFilterQosPolicy-api|. This prevents the system from attempting to enforce a stricter deadline than the filter allows, avoiding unnecessary alarms and ensuring smooth data flow. For further information go to :ref:`timebasedfilterqospolicy`. + ``BY_RECEPTION_TIMESTAMP`` might be preferred in scenarios where the most recent data is always the most relevant, regardless of source time (e.g., real-time sensor data). 
``BY_SOURCE_TIMESTAMP`` is ideal for applications requiring consistency, such as financial transactions or coordinated control systems. For further information go to :ref:`destinationorderqospolicykind`. ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- -.. collapse:: How does the default value of c_TimeInfinite for the period in DeadlineQoS affect the behavior of DataWriters and DataReaders, and under what circumstances might this default value be modified? +.. collapse:: What are the potential challenges and solutions when DataWriters and DataReaders have incompatible DestinationOrderQoS kinds, and how does the compatibility rule ensure proper data ordering? |br| - The default value of ``c_TimeInfinite`` means that there is no deadline, so DataWriters and DataReaders are not constrained by time. This is useful for applications where timeliness is not critical. However, for time-sensitive applications, this default should be changed to a specific duration to ensure timely data updates. For further information go to :ref:`deadlineqospolicy`. + Incompatible kinds can lead to data being ignored or reordered incorrectly, causing inconsistencies. The compatibility rule ensures that the DataReader can handle the ordering provided by the DataWriter. Solutions include aligning QoS settings across DataWriters and DataReaders and using appropriate fallback mechanisms. For further information go to :ref:`destinationorder_compatibilityrule`. ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- -.. collapse:: What are the key differences between BY_RECEPTION_TIMESTAMP and BY_SOURCE_TIMESTAMP in DestinationOrderQoS, and how do these settings impact the consistency and order of received data? +.. collapse:: How might the DestinationOrderQoS policy be applied in a multi-DataWriter scenario to ensure data consistency, and what are the potential pitfalls to avoid? |br| - ``BY_RECEPTION_TIMESTAMP`` orders data based on when it is received, which can lead to different DataReaders having different final values due to network delays. ``BY_SOURCE_TIMESTAMP`` ensures consistency across all DataReaders by using the send time from the DataWriter. ``BY_SOURCE_TIMESTAMP`` is preferred for ensuring consistent data states across multiple DataReaders. Both can be configured by using |DestinationOrderQosPolicyKind-api|. For further information go to :ref:`destinationorderqospolicy`. + In scenarios with multiple DataWriters, such as collaborative robotics or distributed simulations, the |DestinationOrderQosPolicy-api| ensures that data from different writers is correctly ordered. Avoiding pitfalls like network-induced delays involves carefully configuring timestamps and ensuring synchronized clocks across systems. 
For further information go to :ref:`destinationorderqospolicy`. ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- -.. collapse:: In what scenarios might the BY_RECEPTION_TIMESTAMP_DESTINATIONORDER_QOS be preferred over BY_SOURCE_TIMESTAMP_DESTINATIONORDER_QOS, and vice versa? +Status +^^^^^^ + +.. collapse:: What role do status objects play in the communication lifecycle of DDS entities, and how do they interact with listener callbacks to notify applications of status changes? |br| - ``BY_RECEPTION_TIMESTAMP`` might be preferred in scenarios where the most recent data is always the most relevant, regardless of source time (e.g., real-time sensor data). ``BY_SOURCE_TIMESTAMP`` is ideal for applications requiring consistency, such as financial transactions or coordinated control systems. For further information go to :ref:`destinationorderqospolicykind`. + Status objects track the communication state of entities, triggering listener callbacks when changes occur. This mechanism ensures that applications are promptly informed of relevant status updates, facilitating timely responses to communication events. For further information go to :ref:`dds_layer_core_status`. ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- -.. collapse:: What are the potential challenges and solutions when DataWriters and DataReaders have incompatible DestinationOrderQoS kinds, and how does the compatibility rule ensure proper data ordering? +.. collapse:: How does the concept of StatusCondition link entities to Wait-sets, and what benefits does this linkage provide in terms of system synchronization and event handling? |br| - Incompatible kinds can lead to data being ignored or reordered incorrectly, causing inconsistencies. The compatibility rule ensures that the DataReader can handle the ordering provided by the DataWriter. Solutions include aligning QoS settings across DataWriters and DataReaders and using appropriate fallback mechanisms. For further information go to :ref:`destinationorder_compatibilityrule`. + StatusCondition provides a means to monitor multiple status changes efficiently by linking entities to Wait-sets. This linkage allows for consolidated event handling, reducing polling overhead and improving synchronization within the system. For further information go to :ref:`dds_layer_core_status`. ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- -.. 
collapse:: How might the DestinationOrderQoS policy be applied in a multi-DataWriter scenario to ensure data consistency, and what are the potential pitfalls to avoid? + + +.. collapse:: How does the use of StatusMask in enabling or disabling specific callbacks affect the responsiveness and behavior of a DDS system? |br| - In scenarios with multiple DataWriters, such as collaborative robotics or distributed simulations, the |DestinationOrderQosPolicy-api| ensures that data from different writers is correctly ordered. Avoiding pitfalls like network-induced delays involves carefully configuring timestamps and ensuring synchronized clocks across systems. For further information go to :ref:`destinationorderqospolicy`. + The |StatusMask-api| allows selective enabling or disabling of specific callbacks, fine-tuning the system's responsiveness and avoiding unnecessary processing. Proper management of |StatusMask-api| settings ensures that only relevant events trigger callbacks, optimizing system behavior. For further information go to :ref:`dds_layer_core_status`. ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ DOMAIN ------ +.. collapse:: Which are the mandatory arguments to create a DomainParticipant? + + + + |br| + + The mandatory arguments to create a |DomainParticipant-api| are the ``DomainId`` that identifies the domain where the DomainParticipant will be created and the |DomainParticipantQos-api| describing the behavior of the |DomainParticipant-api|. If the provided value is ``PARTICIPANT_QOS_DEFAULT``, the value of the default DomainParticipantQos is used. For further information go to :ref:`dds_layer_domainParticipant_creation`. + +------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ + .. collapse:: What is the purpose of providing a "DomainId" when creating a DomainParticipant? @@ -284,7 +316,7 @@ DOMAIN |br| - The ``DomainId`` identifies the domain where the |DomainParticipant-api| will be created. Do not use ``DomainId`` higher than 200. For further information go to :ref:`dds_layer_domain`. + The ``DomainId`` identifies the domain where the |DomainParticipant-api| will be created. Do not use ``DomainId`` higher than 200. Once the |DomainParticipant-api| is created, its ``DomainId`` cannot be changed. For further information go to :ref:`dds_layer_domain`.
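+    For illustration, a minimal sketch (the domain id ``0`` is an arbitrary choice) of creating a DomainParticipant with the two mandatory arguments:
+
+    .. code-block:: cpp
+
+        #include <fastdds/dds/domain/DomainParticipantFactory.hpp>
+        #include <fastdds/dds/domain/DomainParticipant.hpp>
+
+        using namespace eprosima::fastdds::dds;
+
+        // DomainId and DomainParticipantQos are the two mandatory arguments.
+        DomainParticipant* participant =
+                DomainParticipantFactory::get_instance()->create_participant(
+                        0,                         // DomainId: fixed for the lifetime of the participant
+                        PARTICIPANT_QOS_DEFAULT);  // use the default DomainParticipantQos
+        if (nullptr == participant)
+        {
+            // Creation failed (for example, because of an inconsistent QoS).
+        }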
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- @@ -295,7 +327,7 @@ DOMAIN |br| - A Listener derived from |DomainParticipantListener-api|, implementing the callbacks that will be triggered in response to events and state changes on the DomainParticipant. By default, empty callbacks are used. For further information go to :ref:`dds_layer_domainParticipantListener`. + A Listener derived from |DomainParticipantListener-api|, implementing the callbacks that will be triggered in response to events and state changes on the DomainParticipant. By default, empty callbacks are used. For further information about |DomainParticipantListener-api| and how to implement its callbacks go to :ref:`dds_layer_domainParticipantListener`. ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- @@ -445,6 +477,17 @@ PUBLISHER ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- +.. collapse:: What is an efficient way to write large data types in DDS using a DataWriter to minimize memory usage and processing time? + + + + |br| + + + An efficient way to write large data types in DDS is to make the |DataWriter-api| loan a sample from its memory to the user, and the user to fill this sample with the required values. When write() is called with such a loaned sample, the DataWriter does not copy its contents, as it already owns the buffer. For further information go to :ref:`dds_layer_publisher_write_loans`. + +------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- + .. collapse:: What happens to the contents of a loaned data sample after "write()" has been successfully called with that sample? 
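+    To make the loan mechanism described above concrete, a minimal sketch (assuming the Fast DDS 2.x C++ API, an already created ``writer`` and an IDL-generated type ``LargeData`` with an ``index`` member; these names are illustrative):
+
+    .. code-block:: cpp
+
+        #include <fastdds/dds/publisher/DataWriter.hpp>
+
+        using namespace eprosima::fastdds::dds;
+
+        void* sample = nullptr;
+        if (ReturnCode_t::RETCODE_OK == writer->loan_sample(sample))
+        {
+            LargeData* data = static_cast<LargeData*>(sample);
+            data->index(42);        // fill the loaned buffer in place, no user-side copy
+            writer->write(sample);  // the DataWriter already owns the buffer, so nothing is copied
+        }
+        // After a successful write(), the loaned sample belongs to the DataWriter again
+        // and must not be accessed by the application.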
@@ -527,14 +570,14 @@ SUBSCRIBER ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- -.. collapse:: What is the meaning of the sequences returned by the "DataReader::read()" and "DataReader::take()" operations? +.. collapse:: How do the "DataReader::read()" and "DataReader::take()" operations (and their variants) return information to the application? |br| - Received DDS data samples in a sequence of the data type and corresponding information about each DDS sample in a SampleInfo sequence. For further information go to :ref:`dds_layer_subscriber_accessreceived_loans`. + These two operations return information in two sequences: the received DDS data samples are returned in a sequence of the data type and the corresponding information about each DDS sample is returned in a SampleInfo sequence. For further information go to :ref:`dds_layer_subscriber_accessreceived_loans`. ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- @@ -605,6 +648,27 @@ TOPIC A |Topic-api| is a specialization of the broader concept of TopicDescription. A |Topic-api| represents a single data flow between Publisher and Subscriber. For further information go to :ref:`dds_layer_topic`. + +------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- + +.. collapse:: What is the difference between Topics, Keys and Instances? + + + + |br| + + Topics define the structure of the data being exchanged and are linked to a specific data type. Keys are fields within the data type that uniquely identify different instances of the same Topic. An Instance refers to a specific set of data distinguished by its key values within a Topic, allowing multiple sets of related data (instances) to exist under a single Topic, with updates being directed to the correct instance based on the key. For further information go to :ref:`dds_layer_topic_instances`. + +------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- + +.. collapse:: How is a Topic created? 
+ + + |br| + + Creation of a Topic is done with the |DomainParticipant::create_topic-api| member function on the |DomainParticipant-api| instance, which acts as a factory for the Topic. Mandatory arguments for creating a Topic are a string with the name that identifies the topic, the name of the registered data type that will be transmitted, and the |TopicQos-api| that describes the behavior of the Topic. For further information go to :ref:`dds_layer_topic_creation`. + ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ .. collapse:: What type of values can the "like" operator be used with in the context of content Filtered Topic? diff --git a/docs/fastdds/faq/discovery/discovery.rst b/docs/fastdds/faq/discovery/discovery.rst index 24c7aee83..1080e660e 100644 --- a/docs/fastdds/faq/discovery/discovery.rst +++ b/docs/fastdds/faq/discovery/discovery.rst @@ -31,6 +31,15 @@ Discovery Frequently Asked Questions ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ +.. collapse:: How can you improve the chances of successful participant discovery when using the SIMPLE discovery protocol, and what role do initial announcements play in this process? + + |br| + + To improve the chances of successful participant discovery when using the SIMPLE discovery protocol, you can configure initial announcements to send multiple discovery messages at short intervals. This increases the likelihood that DomainParticipants will detect each other despite potential network disruptions or message loss. By adjusting the |InitialAnnouncementConfig::count-api| (number of announcements) and |InitialAnnouncementConfig::period-api| (interval between announcements), you can optimize discovery reliability during startup. For further information, go to :ref:`simple_disc_settings`. + +------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ + .. collapse:: What is an initial peer list? @@ -71,7 +80,7 @@ Discovery Frequently Asked Questions |br| - The role of the server is to redistribute its clients' discovery information to its known clients and servers. For further information, go to :ref:`discovery_server`. + The primary function of a Discovery Server in the DDS architecture is to centralize and redistribute discovery information among DomainParticipants, ensuring efficient communication between clients and servers.
The server collects discovery data from clients (and other servers) and redistributes it to relevant participants, running a "matching" algorithm to provide only the necessary information for DataWriters and DataReaders to establish communication. It also facilitates server-to-server connections, enabling a more scalable discovery process across the network. For further information, go to :ref:`discovery_server`. ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- diff --git a/docs/fastdds/faq/faq.rst b/docs/fastdds/faq/faq.rst index d4a12d874..ecdd9f9bc 100644 --- a/docs/fastdds/faq/faq.rst +++ b/docs/fastdds/faq/faq.rst @@ -13,8 +13,8 @@ This section answers to frequently asked questions about FastDDS. getting_started/getting_started dds_layer/dds_layer rtps_layer/rtps_layer - transport_layer/transport_layer discovery/discovery + transport_layer/transport_layer persistence_service/persistence_service security/security logging/logging diff --git a/docs/fastdds/faq/getting_started/getting_started.rst b/docs/fastdds/faq/getting_started/getting_started.rst index 05e2cdf95..931c0eb9a 100644 --- a/docs/fastdds/faq/getting_started/getting_started.rst +++ b/docs/fastdds/faq/getting_started/getting_started.rst @@ -26,7 +26,7 @@ Frequently Asked Getting Started Questions |br| - There are four basic entities: |Publisher|, |Subscriber|, |Topic|, |domain|. For further information, go to :ref:`what_is_dds`. + These are the basic entities of a DDS: |Publisher|, |Subscriber|, |DataReader|, |DataWriter|, |Topic|, |domain|. For further information, go to :ref:`what_is_dds`. ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- @@ -48,7 +48,15 @@ Frequently Asked Getting Started Questions |br| - |Publisher-api|. It is the DCPS entity in charge of the creation and configuration of the DataWriters it implements. The |DataWriter-api| is the entity in charge of the actual publication of the messages. Each one will have an assigned Topic under which the messages are published. For further information, go to :ref:`dds_layer_publisher`. + |Publisher-api|. It is the Data-Centric Publish Subscribe entity in charge of the creation and configuration of the DataWriters it implements. The |DataWriter-api| is the entity in charge of the actual publication of the messages. Each one will have an assigned Topic under which the messages are published. For further information, go to :ref:`dds_layer_publisher`. 
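+    For illustration, a minimal sketch (``participant`` and ``topic`` are assumed to already exist, and the default QoS profiles are used) of the Publisher/DataWriter relationship:
+
+    .. code-block:: cpp
+
+        #include <fastdds/dds/publisher/Publisher.hpp>
+        #include <fastdds/dds/publisher/DataWriter.hpp>
+
+        using namespace eprosima::fastdds::dds;
+
+        // The Publisher acts as a factory for DataWriters.
+        Publisher* publisher = participant->create_publisher(PUBLISHER_QOS_DEFAULT);
+
+        // Each DataWriter publishes messages under the Topic it is bound to.
+        DataWriter* writer = publisher->create_datawriter(topic, DATAWRITER_QOS_DEFAULT);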
+ +------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- + +.. collapse:: What is the primary responsibility of a DDS Subscriber entity? + + |br| + + |Subscriber-api|. It is the DCPS Entity in charge of receiving the data published under the topics to which it subscribes. It serves one or more DataReader objects, which are responsible for communicating the availability of new data to the application. See :ref:`dds_layer_subscriber` for further details. ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- @@ -59,7 +67,7 @@ Frequently Asked Getting Started Questions |br| - Domain. This is the concept used to link all publishers and subscribers, belonging to one or more applications, which exchange data under different topics. These individual applications that participate in a domain are called DomainParticipant. The DDS Domain is identified by a domain ID. For further information, go to :ref:`dds_layer_domain`. + |Domain|. This is the concept used to link all publishers and subscribers, belonging to one or more applications, which exchange data under different topics. These individual applications that participate in a domain are called |DomainParticipant-api|. The DDS Domain is identified by a domain ID. Domains create logical separations among the entities that share a common communication infrastructure. They isolate applications running in the same domain from applications running on different domains. For further information, go to :ref:`dds_layer_domain`. ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- @@ -70,7 +78,7 @@ Frequently Asked Getting Started Questions |br| - The Real-Time Publish Subscribe (RTPS) protocol, developed to support DDS applications, is a publication-subscription communication middleware over transports such as UDP/IP. For further information, go to :ref:`what_is_rtps`. + The Real-Time Publish Subscribe (RTPS) protocol, developed to support DDS applications, is a publication-subscription communication middleware over transports such as UDP/IP. RTPS supports unicast and multicast communications, organizing communication within independent domains containing RTPSParticipants, which use RTPSWriters to send data and RTPSReaders to receive data, all centered around Topics that label exchanged data. For further information, go to :ref:`what_is_rtps`. 
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ diff --git a/docs/fastdds/faq/rtps_layer/rtps_layer.rst b/docs/fastdds/faq/rtps_layer/rtps_layer.rst index de7f36aa9..0a13558e0 100644 --- a/docs/fastdds/faq/rtps_layer/rtps_layer.rst +++ b/docs/fastdds/faq/rtps_layer/rtps_layer.rst @@ -50,4 +50,14 @@ RTPS LAYER Frequently Asked Questions In the RTPS Protocol, Readers and Writers save the data about a topic in their associated Histories. Each piece of data is represented by a Change, which *eprosima Fast DDS* implements as ``CacheChange_t``. Changes are always managed by the History. For further information, see :ref:`rtps_layer`. +------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ + +.. collapse:: How can a custom Payload Pool improve the performance of Writers and Readers in RTPS, and what should be considered when implementing one? + + + + |br| + + A custom payload pool can improve the performance of writers and readers in RTPS by optimizing memory usage and reducing costly memory allocation operations, especially when dealing with large or variable-sized data. When implementing one, it is essential to ensure that the payload size accommodates the serialized user data plus metadata, and to choose a strategy (e.g., preallocated, dynamic) that balances memory usage and allocation efficiency based on application needs. For further information, see :ref:`rtps_layer_custom_payload_pool`. + | diff --git a/docs/fastdds/faq/transport_layer/transport_layer.rst b/docs/fastdds/faq/transport_layer/transport_layer.rst index 3d1e7f709..373780bfd 100644 --- a/docs/fastdds/faq/transport_layer/transport_layer.rst +++ b/docs/fastdds/faq/transport_layer/transport_layer.rst @@ -55,7 +55,7 @@ Transport API |br| - A ``Locator_t`` uniquely identifies a communication channel with a remote peer for a particular transport. For further information, see :ref:`transport_transportApi_locator`. + A ``Locator_t`` uniquely identifies a communication channel with a remote peer for a particular transport. For example, on UDP transports, the Locator contains the IP address and port of the remote peer. Listening Locators are used to receive incoming traffic on the DomainParticipant: Multicast locators (listen to multicast communications), Unicast locators (listen to unicast communications), Metatraffic locators (used to receive metatraffic information, usually by built-in endpoints to perform discovery), and User locators (used by the endpoints created by the user to receive user Topic data changes). For further information, see :ref:`transport_transportApi_locator`.
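+    For illustration, a minimal sketch (assuming the Fast DDS 2.x C++ API; the IP address and port are arbitrary) of defining a UDPv4 listening locator and announcing it as a default unicast locator of the DomainParticipant:
+
+    .. code-block:: cpp
+
+        #include <fastdds/dds/domain/qos/DomainParticipantQos.hpp>
+        #include <fastdds/rtps/common/Locator.h>
+        #include <fastrtps/utils/IPLocator.h>
+
+        using namespace eprosima::fastdds::dds;
+        using namespace eprosima::fastrtps::rtps;
+
+        // A UDPv4 unicast listening locator on 192.168.1.10:7412.
+        Locator_t locator;
+        locator.kind = LOCATOR_KIND_UDPv4;
+        IPLocator::setIPv4(locator, 192, 168, 1, 10);
+        locator.port = 7412;
+
+        // Announce it as a default unicast locator of the DomainParticipant (set before creation).
+        DomainParticipantQos qos = PARTICIPANT_QOS_DEFAULT;
+        qos.wire_protocol().default_unicast_locator_list.push_back(locator);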
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- @@ -181,7 +181,7 @@ TCP Transport |br| - The |TCPv4TransportDescriptor-api| must indicate its public IP address in the "wan_addr" data member. On the client side, the DomainParticipant must be configured with the public IP address and "listening_ports" of the TCP as Initial peers. + The |TCPv4TransportDescriptor-api| must indicate its public IP address in the "wan_addr" data member. On the client side, the DomainParticipant must be configured with the public IP address and "listening_ports" of the TCP as Initial peers. For further information, see :ref:`transport_tcp_wan`. ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- @@ -196,6 +196,16 @@ TCP Transport ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- +.. collapse:: What are the advantages and disadvantages of using TCP transport compared to UDP transport? + + + + |br| + + TCP offers reliable data transmission with built-in mechanisms for error detection, correction, and ordered delivery of packets, ensuring data integrity and sequencing. It also includes flow control to adjust the transmission rate between sender and receiver. However, this reliability comes with increased overhead, resulting in slower transmission speeds and higher latency compared to UDP. UDP, on the other hand, is faster with lower latency, as it does not ensure packet delivery, order, or perform error correction, making it ideal for real-time applications like video streaming or gaming, but less reliable for applications requiring data accuracy. + +------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- + Shared Memory Transport ----------------------- @@ -311,7 +321,7 @@ Data Sharing Delivery |br| - No, it does not prevent data copies. For further information, see :ref:`datasharing-delivery`. + No, it does not prevent data copies. 
Data-sharing helps avoid copies between the DataWriter and DataReader by using shared memory for communication, but data still needs to be copied between the application and the DataWriter or DataReader. To avoid these copies, a different mechanism, such as Zero-Copy communication, is required. For further information, see :ref:`datasharing-delivery`. ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- diff --git a/docs/spelling_wordlist.txt b/docs/spelling_wordlist.txt index 90cdf2a3b..330454347 100644 --- a/docs/spelling_wordlist.txt +++ b/docs/spelling_wordlist.txt @@ -319,4 +319,5 @@ verbosities segmentId addr faq +reachability