OpenSplice DDS Forum

Showing results for tags 'partition'.

Found 3 results

  1. guy.volckaert@meggitt.com

    Mapping a partition to a channel

    Hi, I would like some help modifying my ospl.xml file to associate a partition name with a specific channel. In essence, I need to add a new network channel (i.e. a multicast address) to the ospl.xml file, dedicated to specific types of topics. I was planning to publish those topics under a unique partition name (for example "session.group01"), so that whenever I publish a topic using that partition name it goes out over the new channel. Can someone provide an example ospl.xml that illustrates how to do this? I read the "OpenSplice_Deployment.pdf" document, but I still don't know what needs to be done. BTW: I have also attached the ospl.xml file that I'm currently using. Thanks. Attachment: ospl.xml
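
    For concreteness, here is the kind of Partitioning section I have in mind. The address and names are placeholders I made up, and the whole sketch is my guess rather than a verified configuration; as far as I understand, a dedicated multicast address is expressed as a NetworkPartition (mapped from the DCPS partition) rather than as a Channel:

      <Partitioning>
        <GlobalPartition Address="broadcast"/>
        <NetworkPartitions>
          <!-- Placeholder: a dedicated multicast address for session traffic -->
          <NetworkPartition Address="239.255.0.2" Connected="true" Name="SessionPartition"/>
        </NetworkPartitions>
        <PartitionMappings>
          <!-- Placeholder: route any topic published in DCPS partition "session.group01"
               over SessionPartition (expression format: partitionExpr.topicExpr) -->
          <PartitionMapping DCPSPartitionTopic="session.group01.*" NetworkPartition="SessionPartition"/>
        </PartitionMappings>
      </Partitioning>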
  2. I'm trying to come up with a cookbook approach for supporting a huge network over a geographically dispersed area. I have a fake example and some ideas for configuring DDS. I am a bit new to this, so all comments are welcome.

    Imagine a chain store with three nodes per store: energy, security, and automation. Each node has a globally unique ID (a GUID), and they all know their "Store ID", which is also a GUID. Each node subscribes to samples sent out by the others in the store, and publishes its own status as well. Sometimes they want to publish these same samples to a cloud-based headquarters server, which in turn is aware of some large number of stores, say 100,000. Therefore I have 300,000 nodes in this example. Each of the nodes is running the durability service, so that a disconnected node can be rebooted and then have any samples published since the network first went down aligned to it.

    How does one make sure that samples from all nodes are not sent over UDP to all 3 x 100,000 nodes? How can we contain some samples within a store? I don't ever want a master aligner in a store to service headquarters, never mind all the other stores. I'm thinking about a few alternatives, and I do not know enough about DDS to judge which is best.

    Idea 1: In ospl.xml, use <Discovery> to limit traffic via <Peer address= entries, so that all the nodes in a store only know about each other and headquarters.

    Idea 2: In ospl.xml, configure the durability service to do the aligning (so we get the samples published during the network outage, before a reboot):

    a. Use <Role> for each store (using a GUID string) and a different <Role> for "headquarters".

    b. In the store's ospl.xml, do something like:

      <NameSpace name="Default">
        <Partition>StatusUpdate*</Partition>
      </NameSpace>
      <Policy alignee="Initial" aligner="true" durability="Durable" nameSpace="Default">
        <Merge scope="Headquarters" type="Ignore"/>
        <Merge scope="StoreGuiD_XXXXXXXXXXXXXXXXX" type="Merge"/>
      </Policy>

    c. In headquarters' ospl.xml, merge everything:

      <Policy alignee="Initial" aligner="true" durability="Durable" nameSpace="Default">
        <Merge scope="CSG" type="Merge"/>
        <Merge scope="StoreGuiD_XXXXXXXXXXXXXXXXX" type="Merge"/>
      </Policy>

    I think the durability service in headquarters will get all the data, while the durability service on any node inside a store only has store data. Major problem: can I really put 100,000 <Merge scope="StoreGuiD_XXXXXXXXXXXXXXXXX" type="Merge"/> entries in an ospl.xml?

    Idea 3: Could I use partitions, and perhaps a hierarchical naming convention for partitions, to sort out network traffic? Say, if headquarters knows the name (GUID) of a store, it could use "Store_123*" as the partition and talk to all the nodes in that store. A node in a store would use Store_123 as the partition so that data could not leave the store.

    Do any of these make sense? Where am I going wrong or right?
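
    On the "major problem" in Idea 2: if the Merge policy's scope attribute accepts a role expression with wildcards (I believe it does, but treat this as an assumption on my part), headquarters would need only a single entry instead of 100,000:

      <Policy alignee="Initial" aligner="true" durability="Durable" nameSpace="Default">
        <Merge scope="CSG" type="Merge"/>
        <!-- Assumption: a wildcard role expression matching every store role -->
        <Merge scope="StoreGuiD_*" type="Merge"/>
      </Policy>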
  3. I'm trying to configure a unicast reliable channel between two nodes on different networks. There is no multicast between them and no broadcast. I'm using Community 5.5.1. Whilst I have managed to send packets between them, it has been impossible to "resend" packets: when a packet is lost it is resent via broadcast or multicast, which means the resends never reach the other node. I have tried the following configurations:

    1) <GlobalPartition Address="UnicastAddress1" MulticastTimeToLive="32"/>

    In this case in networking.log I can see:

      1385604878.204 Construction (4) Incarnated network, currently 1 incarnations active
      1385604878.204 Test (4) Using broadcast address 192.168.1.255 for default partition
      1385604878.207 Test (1) Creation of sending socket "Channels/Channel[@name=Reliable]" succeeded.
      1385604878.208 Test (3) Adding address expression "UnicastAddress1" to partition 0
      1385604878.208 Test (4) Adding host "UnicastAddress1" (UnicastAddress1) to partition 0
      1385604878.209 Test (4) Using broadcast address 192.168.1.255 for default partition
      1385604878.209 Test (1) Creation and binding of receiving socket "Channels/Channel[@name=Reliable]" succeeded.
      1385604878.209 Test (3) Adding address expression "UnicastAddress1" to partition 0
      1385604878.209 Test (4) Adding host "UnicastAddress1" (UnicastAddress1) to partition 0

    A "default partition" with a broadcast address is created, and this partition is used for resending packets. I really cannot understand this behavior: if I had wanted to use broadcast, I would have put it in the configuration. If the product needs a "default partition", it should use the GlobalPartition provided.

    2) <GlobalPartition Address="MulticastAddress1, UnicastAddress1" MulticastTimeToLive="32"/>

    In this case the networking.log is a bit different:

      1385985564.119 Construction (4) Incarnated network, currently 1 incarnations active
      1385985564.121 Test (3) Adding address expression "MulticastAddress1, UnicastAddress1" to partition 0
      1385985564.121 Test (4) Adding host "MulticastAddress1" (MulticastAddress1) to partition 0
      1385985564.122 Test (4) Adding host "UnicastAddress1" (UnicastAddress1) to partition 0
      1385985564.128 Test (1) Creation of sending socket "Channels/Channel[@name=Reliable]" succeeded.
      1385985564.121 Test (3) Adding address expression "MulticastAddress1, UnicastAddress1" to partition 0
      1385985564.121 Test (4) Adding host "MulticastAddress1" (MulticastAddress1) to partition 0
      1385985564.122 Test (4) Adding host "UnicastAddress1" (UnicastAddress1) to partition 0

    No "default partition" is created, but ONLY MulticastAddress1 is used for resending packets, ignoring UnicastAddress1, which is also in partition 0. As a side note, the following configuration reverts to the first case:

      <GlobalPartition Address="UnicastAddress1, MulticastAddress1" MulticastTimeToLive="32"/>

    3) A GlobalPartition plus an explicit network partition and mappings:

      <GlobalPartition Address="MulticastAddress1" MulticastTimeToLive="32"/>
      <NetworkPartitions>
        <NetworkPartition Address="UnicastAddress1" Compression="false" Connected="true" MulticastTimeToLive="32" Name="PART"/>
      </NetworkPartitions>
      <PartitionMappings>
        <PartitionMapping DCPSPartitionTopic="PART_CONTROL*.*" NetworkPartition="PART"/>
        <PartitionMapping DCPSPartitionTopic="PART_DATA*.*" NetworkPartition="PART"/>
      </PartitionMappings>

    The networking.log:

      1385642520.034 Test (2) Read networking partition (PART,UnicastAddress1,Connected)
      1385642520.043 Test (1) Creation of sending socket "Channels/Channel[@name=Reliable]" succeeded.
      1385642520.044 Test (3) Adding address expression "MulticastAddress1" to partition 0
      1385642520.044 Test (4) Adding host "MulticastAddress1" (MulticastAddress1) to partition 0
      1385642520.044 Test (3) Adding address expression "UnicastAddress1" to partition 1
      1385642520.044 Test (4) Adding host "UnicastAddress1" (UnicastAddress1) to partition 1
      1385642520.047 Test (1) Creation and binding of receiving socket "Channels/Channel[@name=Reliable]" succeeded.
      1385642520.044 Test (3) Adding address expression "MulticastAddress1" to partition 0
      1385642520.044 Test (4) Adding host "MulticastAddress1" (MulticastAddress1) to partition 0
      1385642520.044 Test (3) Adding address expression "UnicastAddress1" to partition 1
      1385642520.044 Test (4) Adding host "UnicastAddress1" (UnicastAddress1) to partition 1

    With this configuration there is another problem: the packets and ACKs are sent and received by each node's networking process, but the data never reaches the applications. There may be something wrong in the application QoS, but with the same QoS the data reached the applications in the previous cases. MulticastAddress1 is used for resending packets.

    Any help would be appreciated. Rodolfo.
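
    One guess I still want to rule out for configuration 3 (treat this as speculation on my part, not a verified fix): samples written in DCPS partitions that do not match the "PART_CONTROL*" / "PART_DATA*" expressions would stay on the global multicast-only partition, which cannot cross between the two networks. A catch-all mapping, assuming "*.*" is a valid partition.topic expression, would route everything over the unicast partition:

      <PartitionMappings>
        <!-- Assumption: "*.*" matches every DCPS partition/topic combination -->
        <PartitionMapping DCPSPartitionTopic="*.*" NetworkPartition="PART"/>
      </PartitionMappings>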