These notes document the current release of the OpenSplice DDSI2 Service. The DDSI2 service is a compliant implementation of the OMG DDSI 2.3 specification for Vortex OpenSplice.
There is a solid body of evidence of real interoperability between OpenSplice and other vendors' implementations, in particular RTI DDS. Nevertheless, there are still some areas that have seen minimal interoperability testing at best. We kindly invite anyone running into interoperability issues to contact us, either via the OpenSplice forum or, for our commercial customers, via our support channels.
Those interested in testing interoperability by running the same applications used at the "OMG Interoperability Demonstrations" can download the full package here.
Please note that this section may not be exhaustive.
For an overview of QoS settings, see QoS compliancy.
QoS changes are not supported.
Limited influence on congestion-control behaviour.
If DDSI2 is operated in its default mode, where each participant has its own UDP/IP port number, the maximum number of participants on a node serviced by an instance of the DDSI2 service is limited to approximately 60; exceeding this limit will cause the DDSI2 service to abort. It appears this mode is only required for interoperability with TwinOaks CoreDX DDS. There is never a limit on the number of remote participants.
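For deployments that need more local participants, DDSI2 can be configured so that participants do not each claim their own port. The fragment below is a minimal sketch; the Internal/ManySocketsMode element and its value are assumptions to be verified against the configuration/deployment guide for the release in use:

```xml
<DDSI2Service name="ddsi2">
  <Internal>
    <!-- assumed setting: when disabled, local participants share a single
         UDP/IP port instead of each getting their own, so the ~60-participant
         limit does not apply; per the note above, the per-participant mode
         appears to be needed only for TwinOaks CoreDX DDS interoperability -->
    <ManySocketsMode>false</ManySocketsMode>
  </Internal>
</DDSI2Service>
```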
No support for inlining QoS settings yet. DataReaders requesting inlined QoS will be ignored.
Running DDSI2 in parallel with the native networking may impact the performance of the native networking even when DDSI2 is not actually involved in the transmission of data, as DDSI2 still performs some processing on the data.
No more than 32 key fields, and the concatenated key fields may not require more than 32 bytes of storage, where strings count as 4 bytes.
When multicast is enabled and a discovered participant advertises a multicast address, it is assumed to be reachable via that multicast address. If it is not, the only current workaround is to operate DDSI2 in multicast-disabled mode with all possible peer nodes listed explicitly, as this restricts the set of addresses advertised by the participant to its unicast address.
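A minimal sketch of such a multicast-disabled configuration is shown below. The General/AllowMulticast and Discovery/Peers elements are assumed to match the DDSI2 configuration schema, and the peer addresses are placeholders; verify the exact element and attribute names against the deployment guide:

```xml
<DDSI2Service name="ddsi2">
  <General>
    <!-- assumed setting: disable multicast so only unicast addresses are used
         and advertised -->
    <AllowMulticast>false</AllowMulticast>
  </General>
  <Discovery>
    <Peers>
      <!-- placeholder addresses: every peer node must be listed explicitly,
           since multicast discovery is unavailable -->
      <Peer address="10.1.0.10"/>
      <Peer address="10.1.0.11"/>
    </Peers>
  </Discovery>
</DDSI2Service>
```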
The following table lists the level of support for each QoS. In some cases, compliancy is better when the DDSI2 service is used to connect two OpenSplice nodes than when it is used to connect an OpenSplice node with another vendor's DDS implementation. The OpenSplice kernel performs many aspects of DDS in ways independent of the underlying wire protocol, but interoperating with another vendor's DDS implementation requires the DDSI2 service to fully implement the mapping prescribed by the DDSI 2.3 specification. This work has not been completed yet.
QoS | OpenSplice to OpenSplice | OpenSplice to other vendor |
---|---|---|
USER_DATA | Compliant | Compliant |
TOPIC_DATA | Compliant | Compliant |
GROUP_DATA | Compliant | Compliant |
DURABILITY | Compliant, but see Issues rooted in the standard | |
DURABILITY_SERVICE | Compliant | Compliant |
PRESENTATION | Compliant | Compliant; access scope GROUP extensions are not yet defined in the standard. |
DEADLINE | Compliant | Compliant |
LATENCY_BUDGET | Compliant | Compliant |
OWNERSHIP | Compliant | Shared ownership: fully supported. Exclusive ownership: partially supported; a higher-strength writer can take ownership, but failover to a lower-strength one may not occur. |
OWNERSHIP_STRENGTH | Compliant | Compliant |
LIVELINESS | Compliant | All entities treated as if liveliness is AUTOMATIC. For OpenSplice participants, the lease duration is fixed at 11s, for readers and writers at infinity. Lease durations of remote participants, readers and writers are honoured correctly. |
TIME_BASED_FILTER | Compliant, except that there is no filtering to limit the rate at which samples are delivered to the reader. | |
PARTITION | Compliant | Compliant |
RELIABILITY | Compliant | Compliant |
TRANSPORT_PRIORITY | Compliant | Compliant |
LIFESPAN | Compliant | Compliant |
DESTINATION_ORDER | Compliant | Compliant |
HISTORY | Compliant, except that the writer history for a DataWriter of transient-local durability is always maintained as if the history setting is KEEP_LAST with depth 1 | |
RESOURCE_LIMITS | Compliant | Compliant |
ENTITY_FACTORY | Compliant | Compliant |
WRITER_DATA_LIFECYCLE | Compliant | Compliant |
READER_DATA_LIFECYCLE | Compliant | Compliant |
The specification only deals with volatile and transient-local data, and leaves the behaviour for transient and persistent data undefined. Many OpenSplice applications follow the recommendation to use transient data and not transient-local data, and indeed, OpenSplice implements transient-local as transient. This evidently creates a complex situation for a DDSI implementation.
The following two tables aim to provide an overview of the expected behaviour when both sides are using OpenSplice, and when only one side is.
OpenSplice writer:
Writer QoS | Reader QoS | Behaviour |
---|---|---|
all | volatile | as expected |
transient-local | transient-local | DDSI2 will internally manage a writer history cache containing the historical data for a history setting of KEEP_LAST with depth 1 (note that this is the default for writers). The data will be advertised in accordance with the specification and new readers receive the old data upon request. An OpenSplice reader will also receive the data maintained by the OpenSplice durability service. |
transient | transient-local | A remote reader on OpenSplice will receive transient data from the OpenSplice durability service, but a remote reader on another vendor's implementation will not. |
transient | transient | same as previous case |
persistent | all | deviations from the expected behaviour are the same as for transient |
Non-OpenSplice writer, OpenSplice reader:
Writer QoS | Reader QoS | Behaviour |
---|---|---|
all | volatile | as expected |
transient-local | transient-local | The reader will request historical data from the writer, and will in addition receive whatever data is stored by the OpenSplice durability service. |
transient | transient-local | The reader may or may not receive transient data from the remote system, depending on the remote implementation. It will receive data from the OpenSplice durability service. The durability service will commence storing data when the first reader or writer for that topic/partition combination is created by any OpenSplice participant (i.e., it is immaterial on which node). |
transient | transient | same as previous case |
persistent | all | deviations from the expected behaviour are the same as for transient |
Once the specification is extended to cover transient data, the situation will become much more straightforward. In the meantime it may be possible to make more configurations work as expected. The specification process is actively exploring the alternatives.
No verification of topic consistency between OpenSplice and other vendors' implementations; the specification leaves this undefined. For OpenSplice-to-OpenSplice communication, the kernel will detect inconsistencies.
The specification of the format of a KeyHash is ambiguous, in that one can argue whether or not padding should be used within a KeyHash to align the fields to their natural boundaries. The DDSI2 service currently does not insert padding, as this has the benefit of allowing more complex keys to be packed into the fixed-length key hash. It may be that this is not the intended interpretation.
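The following DDSI2E configuration fragment illustrates the use of network partitions to provide data via source-specific multicast (SSM) addresses, as described in the comment embedded in the example: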
```xml
<DDSI2EService name="ddsi2e">
  <Discovery>
    <DefaultMulticastAddress>232.3.1.3</DefaultMulticastAddress>
  </Discovery>
  <!-- this ensures that readers for data in partitions A and B will favour SSM,
       and that writers for data in partitions A and B will provide data via SSM,
       via addresses 232.3.1.4 and 232.3.1.5, respectively -->
  <Partitioning>
    <NetworkPartitions>
      <NetworkPartition name="ssmA" address="232.3.1.4"/>
      <NetworkPartition name="ssmB" address="232.3.1.5"/>
    </NetworkPartitions>
    <PartitionMappings>
      <PartitionMapping DCPSPartitionTopic="A.*" NetworkPartition="ssmA"/>
      <PartitionMapping DCPSPartitionTopic="B.*" NetworkPartition="ssmB"/>
    </PartitionMappings>
  </Partitioning>
</DDSI2EService>
```