OpenSplice DDS Forum

Everything posted by Hans van 't Hag

  1. Hans van 't Hag

    How to detect DDSi2 service has not started?

    Hi Joffrey, Sorry for the delayed response; we had issues with the domain registration of this forum and have been offline for over a month .. W.r.t. your question about a programmatic way to detect certain errors/warnings: we have a feature called a 'reportPlugin', documented in the deployment manual, which allows a user library to be plugged in that can access any info/warning/error message that (normally also) goes to the ospl_info and ospl_error logs. Perhaps that would help you detect that something went wrong w.r.t. the service configuration and/or interfaces not being available. Regards, Hans
  2. I'd check out "Eclipse Cyclone DDS" (https://projects.eclipse.org/projects/iot.cyclonedds), which we recently donated to the Eclipse Foundation and which is a low-footprint implementation of DDS. I don't think it's been ported to Arduino yet, though .. but we of course invite the community to participate in this (new) project. Cheers, Hans
  3. Hans van 't Hag

    register_instance vs lookup_instance

    Hi, When writing the same instance (i.e. key value) repeatedly (but with different non-key field values), it's fine to reuse the same handle. The handle is a local concept (so it can't be shared between participants) and can improve performance when writing specific instances from large key sets. Note also that it's not mandatory to use register_instance: if you don't explicitly register the instance, it will be done implicitly, which costs a little time to locate the 'right' instance based on the values of the key fields in the provided sample. Note also that an error will be raised when the key fields of a provided sample in a write don't match the instance handle (if provided). Hope this helps, -Hans
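    For illustration, a minimal sketch assuming the ISO C++ (isocpp2) API with a hypothetical keyed type Stock (ticker as key field, price as payload); compute_price() is just a placeholder:

        // Explicit registration: done once per key value, returns a handle.
        Stock sample("MSFT", 0.0f);                     // hypothetical generated type
        dds::core::InstanceHandle handle = writer.register_instance(sample);

        // Repeated writes of the same instance can reuse the handle, which
        // avoids the key-based instance lookup on every write.
        for (int i = 0; i < 1000; i++) {
            sample.price() = compute_price(i);          // non-key fields change, key stays "MSFT"
            writer.write(sample, handle);
        }

        // Registration is optional: this write is equally legal; the instance is
        // then registered implicitly by looking at the sample's key fields.
        writer.write(sample);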
  4. Hans van 't Hag

    Number / Last Participant

    Hi, Another option is to exploit the notion of 'built-in topics' in DDS, which allow you to 'discover' any participant/publisher/subscriber/topic in the system by simply reading this meta-data from these pre-defined topics. In the OpenSplice examples section I believe there is an example of how to do that .. Regards, Hans
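    As a rough sketch (ISO C++ API; the builtin_subscriber/find/read calls follow the DDS-PSM-Cxx spec, error handling omitted):

        #include <dds/dds.hpp>
        #include <iostream>
        #include <iterator>
        #include <vector>

        int main() {
            dds::domain::DomainParticipant dp(0);                   // domain 0
            dds::sub::Subscriber builtin_sub = dds::sub::builtin_subscriber(dp);

            // Look up the pre-created reader of the DCPSParticipant built-in topic.
            std::vector<dds::sub::DataReader<dds::topic::ParticipantBuiltinTopicData> > readers;
            dds::sub::find<dds::sub::DataReader<dds::topic::ParticipantBuiltinTopicData> >(
                builtin_sub, "DCPSParticipant", std::back_inserter(readers));

            // Every valid sample describes a discovered DomainParticipant in the domain.
            dds::sub::LoanedSamples<dds::topic::ParticipantBuiltinTopicData> samples = readers[0].read();
            for (const auto& s : samples) {
                if (s.info().valid()) {
                    std::cout << "discovered a DomainParticipant" << std::endl;
                }
            }
            return 0;
        }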
  5. Hans van 't Hag

    CMSOAP Interface

    Hi, The CMSOAP interface is an internal interface used by our Tuner/Tester tools: it allows these tools to 'connect' to any remotely deployed DDS system as long as there's one node in that remote DDS system that has our 'soap service' configured as one of the pluggable services of OpenSplice. It is not related to the DDS-WEB specification. Regards, Hans
  6. Hans van 't Hag

    helloworld standalone example memory leak

    Yeah, I remember that the msgId is a 'key' in the topic so you're creating as many instances as you have key-values. You could try unregistering the instance after you've written it.
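    Something along these lines (a sketch assuming an isocpp2 HelloWorld-style writer whose message type is keyed on msgId):

        // Each distinct msgId value is a separate instance; unregistering after
        // the write tells the middleware this writer is done with that instance,
        // so its administration can be cleaned up instead of accumulating.
        HelloWorldData::Msg msg(i, "Hello World");               // msgId = i (the key field)
        dds::core::InstanceHandle handle = writer.register_instance(msg);
        writer.write(msg, handle);
        writer.unregister_instance(handle);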
  7. Hans van 't Hag

    helloworld standalone example memory leak

    Hi, It might help if you could share the modified code. What might have happened is that you're creating as many instances (i.e. key values) as you're creating samples and then perhaps don't unregister() or take() the samples .. in which case the administration overhead related to each 'instance' might explain your observation. Regards, Hans
  8. Hans van 't Hag

    Mapping a partition to a channel

    Hi Guy, I think you're confusing 2 concepts: network channels (which relate to a range of transport priorities) and network partitions (which relate to a set of logical DDS partitions). In your case it seems like you're looking for 'network partitions' rather than 'network channels'. Currently both 'mapping' features (i.e. mapping of logical QoS's such as TRANSPORT_PRIORITY and/or PARTITION to physical constructs such as network channels and network partitions) are part of our 'extended' DDSI2E service, which is currently not part of our community edition. Note however that these are purely non-functional optimizations, so even without them you should functionally be 'fine' when 'just' using logical DDS partitions to 'group' your topics and their connectivity. The good news is that DDSI2E and its mapping features will become available soon when we open-source our full product under the Eclipse Foundation (working title: Eclipse Cyclone). In the meantime you could try out our commercially supported version to see if it suits your needs via our website (see OpenSplice download). Regards, Hans
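    Just for reference, the DDSI2E partition mapping looks roughly like the sketch below (illustrative only; please check the deployment manual for the exact schema once it is available to you):

        <DDSI2EService name="ddsi2e">
          <Partitioning>
            <NetworkPartitions>
              <!-- a multicast address reserved for this group of data -->
              <NetworkPartition Name="sensorsNP" Address="239.255.0.2"/>
            </NetworkPartitions>
            <PartitionMappings>
              <!-- DCPS "partition.topic" expression mapped onto the network partition -->
              <PartitionMapping DCPSPartitionTopic="sensors.*" NetworkPartition="sensorsNP"/>
            </PartitionMappings>
          </Partitioning>
        </DDSI2EService>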
  9. Hans van 't Hag

    Multitopic in isocpp2

    Yes, our IDL parser(s) do support unions. What languages do you envision? I'll start some discussions internally on the above and see what we could do to help in the short term, as proper multi-topic support isn't on the horizon yet.
  10. Hans van 't Hag

    Multitopic in isocpp2

    Hi Emmanuel, You're right that MultiTopics are indeed targeting that functionality, and also in your observation that this functionality isn't supported yet (actually by none of the current DDS vendors). The reason for not having implemented this (yet) is that there are still gaps in the semantics in case of incomplete information in 2 situations: (1) when should you first get a joined set, i.e. only when it's complete w.r.t. availability of all the attributes from all topics that you're interested in; (2) when should you last get updates on a joined set, i.e. up to the point where at least one of the attributes has reached the end of its lifecycle, e.g. is part of a disposed instance. That said, it's not rocket science to do such a join at application level (we ARE following a relational model, aren't we) and apply some basic semantics for the above; see the sketch below. We're still looking for some good (business) cases that would justify investment in that piece of missing spec coverage, so perhaps you can contribute yours.
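    To illustrate such an application-level join, a simplified sketch (hypothetical generated types TrackPosition and TrackIdentity sharing a track_id key; the two readers and the handle_joined() callback are assumed to exist):

        // Keep the latest sample per key from both topics and emit a joined
        // record only once both halves are present, i.e. a simple answer to
        // question (1) above; lifecycle handling for (2) would additionally
        // erase entries when an instance gets disposed/unregistered.
        std::map<int32_t, TrackPosition> positions;
        std::map<int32_t, TrackIdentity> identities;

        for (const auto& s : position_reader.take()) {
            if (s.info().valid()) positions[s.data().track_id()] = s.data();
        }
        for (const auto& s : identity_reader.take()) {
            if (s.info().valid()) identities[s.data().track_id()] = s.data();
        }

        for (const auto& pos : positions) {
            auto id = identities.find(pos.first);
            if (id != identities.end()) {
                handle_joined(pos.second, id->second);   // hypothetical application callback
            }
        }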
  11. Hans van 't Hag

    Communicate betwwen different subnets

    I suspect that there's an issue with multicast not being routed between your subnets. You could perhaps try to set up DDSI not to use multicast (AllowMulticast=false) and explicitly specify the peers under Discovery/Peers/Peer
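    In the DDSI2 configuration that would look roughly like this (the peer addresses are just placeholders for your actual hosts):

        <DDSI2Service name="ddsi2">
          <General>
            <!-- disable multicast so discovery/data use unicast only -->
            <AllowMulticast>false</AllowMulticast>
          </General>
          <Discovery>
            <Peers>
              <!-- explicitly list the hosts to discover in the other subnet(s) -->
              <Peer Address="192.168.1.10"/>
              <Peer Address="10.0.0.20"/>
            </Peers>
          </Discovery>
        </DDSI2Service>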
  12. Hans van 't Hag

    Vortex DDS on Raspberry pi

    Hi Jeremy, If you're looking for our community edition, there are pre-built versions for Raspberry Pi (DDS Community Edition Version 6.7 for Raspberry Pi Host and Target, Debian Linux, gcc 4.9.2, ARM v6l) available here: http://www.prismtech.com/dds-community/software-downloads If you're looking for the latest commercially supported Vortex OpenSplice installer (6.8) for Raspberry Pi, please contact us here: http://www.prismtech.com/contact-us You can move whichever version to the SD card of the Pi .. that shouldn't be a problem .. just make sure that the 'release.com' script which you have to source before usage sets the right paths for the OSPL_HOME and OSPL_URI environment variables. Thanks, Hans
  13. Hans van 't Hag

    TTopicImpl.hpp Line 175 Bus Error

    Could you share the code so we can reproduce it? Thanks, Hans
  14. Hans van 't Hag

    Compilation fails with gcc/g++ 5.3.1 (ubuntu 16.04)

    You can also take a look here: http://forums.opensplice.org/index.php?/topic/2602-build-opensplicedds-v64-from-source-error-on-unbuntu-1504/ Which references the page: http://ros2.xyz/blog/2015/09/16/building-opensplicedds-v6-4-on-ubuntu15-04/ We will also update our website w.r.t. these instructions .. sorry for the inconvenience Regards, Hans
  15. Hans van 't Hag

    6.7 and apache license

    Hi Bud, Yes, being (even) more permissive was the driver for our change to the Apache 2.0 license, as especially the combination of LGPLv3 and Apache was troublesome for some of our community users. Regards, Hans
  16. Hi, DestinationOrder is an RxO QoS, so you must ensure that you've set the same policy on both the writer and the reader. If you've only set it on the reader, then there's a QoS mismatch that indeed results in no communication happening (anymore). Apart from that, ordering by SourceTimestamp only orders samples of the same instance (key value). If you require ordering of samples over multiple instances within a topic, or even ordering of samples over a group of topics, then you need to use the PRESENTATION QoS policy (w.r.t. both 'ordered_access' as well as access_scope 'TOPIC' or 'GROUP'). Please note that the current community edition doesn't support anything beyond the default PRESENTATION QoS settings (access_scope 'INSTANCE'), yet our upcoming community update in June DOES support the full PRESENTATION QoS
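    A minimal sketch in the isocpp2 API, setting the policy symmetrically (Msg, publisher, subscriber and topic are placeholders):

        // DestinationOrder is RxO: the reader's request must be compatible with
        // the writer's offer, so set BY_SOURCE_TIMESTAMP on both sides.
        dds::pub::qos::DataWriterQos wqos = publisher.default_datawriter_qos();
        wqos << dds::core::policy::DestinationOrder::SourceTimestamp();
        dds::pub::DataWriter<Msg> writer(publisher, topic, wqos);

        dds::sub::qos::DataReaderQos rqos = subscriber.default_datareader_qos();
        rqos << dds::core::policy::DestinationOrder::SourceTimestamp();
        dds::sub::DataReader<Msg> reader(subscriber, topic, rqos);

        // Ordering across instances/topics would additionally require the
        // PRESENTATION policy (ordered_access, access_scope TOPIC/GROUP) on the
        // Publisher/Subscriber QoS; not shown here.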
  17. Hans van 't Hag

    DDS Community download links are not working!

    Hi Sean, We should be up again now .. sorry for any inconvenience caused! Regards, Hans
  18. Hans van 't Hag

    DDS Community download links are not working!

    Hi, We're in the middle of moving offices .. downloads will be back up again shortly. Thanks, Hans
  19. Hans van 't Hag

    What if I want to forward topic sample to the cloud (MQTT)?

    Hi, OpenSplice indeed allows you to specify (extensible) data structures with Google Protobuf instead of IDL, and that includes the ability to extract the protobuf 'blob' (rather than individual attributes) at your reader side, after which you could send that blob to the cloud using any technology and then 'digest' it using the appropriate protobuf definition (the same one that was used at the writer, where it was 'wrapped' into a DDS topic for efficient sharing in the DDS domain). Note that besides Vortex OpenSplice we also have products (Vortex Fog and Vortex Cloud, see http://www.prismtech.com/vortex ) that allow you to transparently (and securely) 'extend' the DDS 'backbone' from a LAN to any private/public cloud (realized by a combination of dynamic discovery over the WAN and selective routing of data for which there is remote interest from the UDP/multicast LAN to the typically TCP/SSL based WAN). Hope this helps, -Hans
  20. Ok, I think I understand the source of the confusion: QoS policies defined on topic level do not automatically get 'transferred' to the policies of readers/writers. They are there to serve as 'defaults', but they have to be explicitly copied (the copy_from_topic_qos API's). The reason is that the people who come up with the data model, i.e. the topics, are often also the domain experts who can 'reason' about those QoS's that define global behavior such as durability, reliability, urgency and importance. So if they define the proper topic-level QoS's, then the individual application programmers that eventually write/read samples of those topics can 'inherit' that knowledge by re-using the policies as defined on topic level. In your case, although you have specified a KEEP_ALL history policy on topic level, that wasn't 'effectuated' for your reader, which is still using the default KEEP_LAST policy with a depth of 1. Since a reader history is more about local behavior than system-wide behavior, it usually isn't specified on topic level, unless your topic is non-volatile, i.e. TRANSIENT or PERSISTENT, in which case you MUST specify the DURABILITY_SERVICE QoS policies on topic level (i.e. history kind, history depth and resource limits). So for your example there are 2 things to do: 1. specify the DURABILITY_SERVICE policies for your topic so that the middleware knows how to 'retain' those non-volatile samples; 2. specify a (matching) history policy for your reader so that sufficient historical samples are maintained in your reader's history cache. Typically I'd suggest a KEEP_LAST policy with a sufficient depth rather than a KEEP_ALL policy, as KEEP_ALL will cause end-to-end flow control when resource limits are reached (or, if you don't specify resource limits, you can run out of memory if you're not careful with your instance management and/or disposing stale data). Hope this helps.
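    To make the two steps concrete, a sketch in the isocpp2 API (the type Msg, the topic name and the depth of 100 are placeholders):

        // 1. Topic-level QoS: DURABILITY + DURABILITY_SERVICE tell the middleware
        //    how to retain the non-volatile samples for late-joiners.
        dds::topic::qos::TopicQos tqos = dp.default_topic_qos();
        tqos << dds::core::policy::Durability::Transient()
             << dds::core::policy::DurabilityService(
                    dds::core::Duration::zero(),                  // service_cleanup_delay
                    dds::core::policy::HistoryKind::KEEP_LAST,    // history kind
                    100,                                          // history depth
                    dds::core::LENGTH_UNLIMITED,                  // max_samples
                    dds::core::LENGTH_UNLIMITED,                  // max_instances
                    dds::core::LENGTH_UNLIMITED);                 // max_samples_per_instance
        dds::topic::Topic<Msg> topic(dp, "MyTopic", tqos);

        // 2. Reader-level QoS: start from the topic QoS (the copy_from_topic_qos
        //    pattern) and give the reader enough history of its own.
        dds::sub::qos::DataReaderQos rqos(topic.qos());
        rqos << dds::core::policy::History::KeepLast(100);
        dds::sub::DataReader<Msg> reader(subscriber, topic, rqos);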
  21. Hans van 't Hag

    Is it possible to make persistent queue for the reader only?

    W.r.t. setting-up the QoS-policies: You're right I overlooked the settings of the default publisher QoS .. sorry
  22. Hans van 't Hag

    OpenSplice Liveliness QoS policy does not work

    If you do a hard kill using the task manager, then the liveliness change relies on the domain's lease duration, which is 10 seconds by default (you can adapt that in the xml configuration on domain level). If you do a soft kill like Ctrl-C, the liveliness change will be communicated immediately (as part of the exception handling of the terminating application). The domain-level lease can be adapted from its default 10 seconds to 1 second in the xml configuration on Domain level like this: <Lease> <ExpiryTime update_factor="0.2">1.0</ExpiryTime> </Lease> Note that this also implies higher-frequency heartbeats (you could 'up' the update_factor to 0.4 for that)
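    In context that sits under the Domain section of the ospl configuration, roughly like this (the Name/Id values are just placeholders for your own configuration):

        <OpenSplice>
          <Domain>
            <Name>ospl_sp_ddsi</Name>
            <Id>0</Id>
            <Lease>
              <!-- 1s lease, renewed every update_factor * ExpiryTime = 0.2s -->
              <ExpiryTime update_factor="0.2">1.0</ExpiryTime>
            </Lease>
          </Domain>
        </OpenSplice>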
  23. Hans van 't Hag

    Is it possible to make persistent queue for the reader only?

    Some notes/suggestions:
    - If you want to decouple the lifecycle of the data from that of the application(s), you should exploit DDS's durability support for TRANSIENT and/or PERSISTENT data.
    - In order for that to work, you should set the TOPIC-level durability QoS to either TRANSIENT or PERSISTENT (and configure a persistent-store location in the middleware configuration 'xml').
    - Now it's up to a writer to write samples either as volatile or non-volatile data, but only non-volatile/durable (i.e. TRANSIENT or PERSISTENT) data will be preserved by the middleware's durability services (of which you need at least 1 instance).
    - Our community edition doesn't support the federated deployment where you could run a 'federation' somewhere (even without applications) whose durability service would maintain durable data as well as provide late-joiners with historical data, but each 'single-process' application 'embeds' a durability service 'out-of-the-box', so you'd need to have/keep at least 1 application running.
    - Now late-joining applications that create a TRANSIENT or PERSISTENT reader will receive historical data from the middleware (don't forget to call wait_for_historical_data() to get synchronized with the delivery of that non-volatile data to your late-joining reader); see the sketch below.
    Furthermore I noticed that you didn't create a default writer-QoS object but instead used the topic QoS for your writer. That's not good practice, as the topic-level QoS's don't provide all writer (or reader, for that matter) QoS's, so it's best to follow the pattern of all our examples where you first create a set of default QoS's for your reader/writer and then copy the topic-level QoS's (copy_from_topic_qos) to modify those QoS's for which you've created (system-wide) values on topic level. Also note that, as durability operates independently from readers/writers, you have to 'notify' the required durability behavior to the middleware by registering the non-volatile topic(s) with the correct QoS policies for DURABILITY_SERVICE (i.e. history kind, depth, resource limits and cleanup delay).
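    As a rough reader-side sketch (isocpp2; Msg, the topic and the 30-second timeout are placeholders):

        // Late-joining reader for a TRANSIENT topic: derive the reader QoS from
        // the topic QoS (rather than passing the TopicQos object directly) and
        // wait until the durability service has delivered the historical data.
        dds::sub::qos::DataReaderQos rqos(topic.qos());               // copy-from-topic-qos pattern
        dds::sub::DataReader<Msg> reader(subscriber, topic, rqos);

        reader.wait_for_historical_data(dds::core::Duration(30, 0));  // block until aligned (or timeout)
        dds::sub::LoanedSamples<Msg> samples = reader.read();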
  24. Hi, I'd have guessed that the 'real' issue is that you wrote both messages with the same userID value '1'. Your solution of adding the payload itself as an extra key might 'bypass' that error, but it's not a very efficient solution. -Hans
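    For reference, the key is what identifies an 'instance'; with IDL like the standard HelloWorld below, both of your messages (userID == 1) are updates of one and the same instance rather than two different ones:

        module HelloWorldData {
            struct Msg {
                long   userID;    // the key: all samples with the same userID belong to one instance
                string message;
            };
        #pragma keylist Msg userID
        };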
  25. Hans van 't Hag

    Assign Values for Enums?

    Hi, Assigning values to enums was just not part of the IDL (3) specification. With the introduction of IDL 4 (support for which is on our roadmap) this will become feasible.
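    In IDL 4 that would look something like the sketch below (using the @value annotation from the IDL 4 annotations building block; treat the exact syntax as an assumption until we actually support it):

        enum Status {
            @value(0)  IDLE,
            @value(10) RUNNING,
            @value(99) FAILED
        };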