OpenSplice DDS Forum


Showing results for tags 'NETWORKING'.



Found 5 results

  1. PrismTech’s Vortex OpenSplice Selected by Fujitsu for 1FINITY Networking Platform

    Vortex OpenSplice will help enable a data-centric provisioning framework for a new Fujitsu product.

    Boston, MA, USA – September 14, 2015 – PrismTech™, a global leader in software platforms for distributed systems, today announced that Fujitsu has selected Vortex™ OpenSplice™ as its data-centric middleware solution for a new provisioning framework for its revolutionary 1FINITY™ Networking Platform. This new category of network equipment spans access, packet and optical technologies. The revamped provisioning solution supports the Fujitsu SDN/NFV architecture, building upon open industry standards such as NETCONF and YANG.

    The vital task of robust data connectivity between platform components is provided by PrismTech’s Vortex OpenSplice, a product proven in numerous business- and mission-critical environments in markets such as Aerospace & Defense, Smart Energy, Industrial Automation, Transportation, Healthcare and Telecommunications. Vortex OpenSplice decouples component interfaces, making the provisioning framework easier to maintain and extend. Loose coupling will help reduce system complexity, make the integration of new components much easier, and enable applications that are portable across products. Fujitsu expects to make product updates more frequently and release new features to market more quickly, which is key to maintaining market leadership in a highly competitive and fast-evolving market.

    Once deployed, the new provisioning framework will allow the next generation of network equipment based on the 1FINITY platform to be configured more quickly and easily, so that providers can provision new services more efficiently and respond far more quickly to the changing needs of their end users.

    “PrismTech is proud to be a partner of Fujitsu and supports their agile services network vision. Fujitsu is once again shaping the future of trusted communication networks whilst maintaining the highest standards of development and release management; we are delighted that they have selected PrismTech’s Vortex to be part of their innovative vision,” said Spiros Motsenigos, VP US Sales, PrismTech.

    Vortex OpenSplice is a key component of PrismTech’s Vortex Intelligent Data Sharing Platform, a suite of interoperable Internet of Things (IoT) enabling technologies based on the Object Management Group’s (OMG) Data Distribution Service (DDS) for Real-time Systems standard. Vortex provides secure, real-time data connectivity independent of network configuration (LAN or WAN, wired, wireless, mobile) or underlying platform technologies. It can be used by applications to enable seamless data sharing across server, embedded, web, mobile or cloud environments.

    http://www.prismtech.com/news/prismtech-vortex-opensplice-selected-fujitsu-1finity-networking-platform
  2. OpenSpliceDDS not connecting (not working) in WAN

    Hi All,

    I am running OpenSpliceDDS on public IP 'x.x.x.x' and I am trying to connect to a different subnet IP 'x.x.y.y' with the following configuration. Note: both machines have a single Ethernet interface, so OpenSpliceDDS will use that same interface on each side.

    Server side:

        <General>
          <NetworkInterfaceAddress>AUTO</NetworkInterfaceAddress>
          <AllowMulticast>true</AllowMulticast>
          <EnableMulticastLoopback>true</EnableMulticastLoopback>
        </General>
        <Partitioning>
          <GlobalPartition Address="multicast"/>
        </Partitioning>

    Client side:

        <General>
          <NetworkInterfaceAddress>AUTO</NetworkInterfaceAddress>
        </General>
        <Partitioning>
          <GlobalPartition Address="x.x.x.x"/>
        </Partitioning>
        <Discovery Scope="*.*" enabled="true">
          <PortNr>54120</PortNr>
          <ProbeList>x.x.x.x</ProbeList>
        </Discovery>

    Please help me to resolve the issue (a unicast-only configuration sketch follows the results list). Thanks -Viswa
  3. Fragment Buffers

    In a test system that has been running for a number of weeks, I am now frequently getting an error that causes the networking service to terminate:

        Description : Channel "Channels/Channel[@name='BestEffort']" has no more fragmentbuffers available.
        An incoming fragmented message is too large to fit in the configured fragmentbuffers.
        Increase the size of the fragmentbuffers (config://NetworkService/Channels/Channel/FragmentSize)
        and/or the maximum number of fragmentbuffers to use (config://NetworkService/Channels/Channel/Receiving/DefragBufferSize).
        Terminating Networking service

    I had originally set DefragBufferSize to the default of 5000 but am now looking at increasing it (see the channel configuration sketch after the results list). What I am slightly worried about is whether this will actually fix my problem or whether it will just mask an underlying issue of incoming data not being processed in time. Any advice?
  4. I'm trying to configure a unicast reliable channel between two nodes on different networks. There is no multicast between them and no broadcast. I'm using Community 5.5.1. Whilst I have managed to send packets between them, it has been impossible to "resend" packets: when a packet is lost it is resent via broadcast or multicast, which means the resent packets never reach the other node. I have tried the following configurations:

    1) <GlobalPartition Address="UnicastAddress1" MulticastTimeToLive="32"/>

       In this case networking.log shows:

           1385604878.204 Construction (4) Incarnated network, currently 1 incarnations active
           1385604878.204 Test (4) Using broadcast address 192.168.1.255 for default partition
           1385604878.207 Test (1) Creation of sending socket "Channels/Channel[@name=Reliable]" succeeded.
           1385604878.208 Test (3) Adding address expression "UnicastAddress1" to partition 0
           1385604878.208 Test (4) Adding host "UnicastAddress1" (UnicastAddress1) to partition 0
           1385604878.209 Test (4) Using broadcast address 192.168.1.255 for default partition
           1385604878.209 Test (1) Creation and binding of receiving socket "Channels/Channel[@name=Reliable]" succeeded.
           1385604878.209 Test (3) Adding address expression "UnicastAddress1" to partition 0
           1385604878.209 Test (4) Adding host "UnicastAddress1" (UnicastAddress1) to partition 0

       A "default partition" with a broadcast address is created, and this partition is used for resending packets. I really cannot understand this behaviour, because if I had wanted to use broadcast I would have put it in the configuration. If the product needs a "default partition" it should use the GlobalPartition provided.

    2) <GlobalPartition Address="MulticastAddress1, UnicastAddress1" MulticastTimeToLive="32"/>

       In this case networking.log is a bit different:

           1385985564.119 Construction (4) Incarnated network, currently 1 incarnations active
           1385985564.121 Test (3) Adding address expression "MulticastAddress1, UnicastAddress1" to partition 0
           1385985564.121 Test (4) Adding host "MulticastAddress1" (MulticastAddress1) to partition 0
           1385985564.122 Test (4) Adding host "UnicastAddress1" (UnicastAddress1) to partition 0
           1385985564.128 Test (1) Creation of sending socket "Channels/Channel[@name=Reliable]" succeeded.
           1385985564.121 Test (3) Adding address expression "MulticastAddress1, UnicastAddress1" to partition 0
           1385985564.121 Test (4) Adding host "MulticastAddress1" (MulticastAddress1) to partition 0
           1385985564.122 Test (4) Adding host "UnicastAddress1" (UnicastAddress1) to partition 0

       No "default partition" is created, but ONLY MulticastAddress1 is used for resending packets; UnicastAddress1, which is also in partition 0, is ignored. As a side note, the following configuration:

           <GlobalPartition Address="UnicastAddress1, MulticastAddress1" MulticastTimeToLive="32"/>

       reverts to the first case.

    3) <GlobalPartition Address="MulticastAddress1" MulticastTimeToLive="32"/>
       <NetworkPartitions>
         <NetworkPartition Address="UnicastAddress1" Compression="false" Connected="true" MulticastTimeToLive="32" Name="PART"/>
       </NetworkPartitions>
       <PartitionMappings>
         <PartitionMapping DCPSPartitionTopic="PART_CONTROL*.*" NetworkPartition="PART"/>
         <PartitionMapping DCPSPartitionTopic="PART_DATA*.*" NetworkPartition="PART"/>
       </PartitionMappings>

       The networking.log:

           1385642520.034 Test (2) Read networking partition (PART,UnicastAddress1,Connected)
           1385642520.043 Test (1) Creation of sending socket "Channels/Channel[@name=Reliable]" succeeded.
           1385642520.044 Test (3) Adding address expression "MulticastAddress1" to partition 0
           1385642520.044 Test (4) Adding host "MulticastAddress1" (MulticastAddress1) to partition 0
           1385642520.044 Test (3) Adding address expression "UnicastAddress1" to partition 1
           1385642520.044 Test (4) Adding host "UnicastAddress1" (UnicastAddress1) to partition 1
           1385642520.047 Test (1) Creation and binding of receiving socket "Channels/Channel[@name=Reliable]" succeeded.
           1385642520.044 Test (3) Adding address expression "MulticastAddress1" to partition 0
           1385642520.044 Test (4) Adding host "MulticastAddress1" (MulticastAddress1) to partition 0
           1385642520.044 Test (3) Adding address expression "UnicastAddress1" to partition 1
           1385642520.044 Test (4) Adding host "UnicastAddress1" (UnicastAddress1) to partition 1

       With this configuration there is another problem: the packets and ACKs are sent and received by each node's networking process, but the data never reaches the applications. There may be something wrong in the application QoS, but with the same QoS the data reached the applications in the previous cases. MulticastAddress1 is used for resending packets.

    Any help would be appreciated. Rodolfo.
  5. I tried to configure the network service and added its entry to the ospl configuration file as per the PDF documentation. But when I do that and start my publisher program, it gives the following errors in the log:

        ========================================================================================
        Report      : ERROR
        Date        : Thu Oct 24 14:33:11 2013
        Description : dlopen error: networking: cannot open shared object file: No such file or directory
        Node        : TMT1
        Process     : java <2044>
        Thread      : spliced 7fe0bd5f0700
        Internals   : V6.3.130716OSS/167f283/167f283/os_libraryOpen/os_library.c/70/0/1382605391.992130176
        ========================================================================================
        Report      : ERROR
        Date        : Thu Oct 24 14:33:11 2013
        Description : Problem opening 'networking'
        Node        : TMT1
        Process     : java <2044>
        Thread      : spliced 7fe0bd5f0700
        Internals   : V6.3.130716OSS/167f283/167f283/OpenSplice domain service/spliced.c/450/0/1382605391.99

    My servers are running 64-bit CentOS with a 64-bit Sun JVM, and I am using the community edition of OpenSplice. I saw that someone had posted a similar issue but there has been no response to it. Can someone from the OpenSplice technical team explain why this error occurs and how to resolve it? (A sketch of the kind of service entry involved follows the results list.)
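
A minimal unicast-only sketch for the WAN question in result 2, assuming no multicast route exists between the two sites. Only elements that already appear in that post (General, Partitioning/GlobalPartition, Discovery with PortNr and ProbeList) are used; the addresses and port number are placeholders, and the behaviour should be verified against the deployment guide for the OpenSplice version in use:

    <!-- Hypothetical sketch, not a verified configuration.
         Each node points GlobalPartition and ProbeList at the peer's
         public unicast address; multicast is turned off because it is
         assumed not to be routed across the WAN. -->
    <General>
      <NetworkInterfaceAddress>AUTO</NetworkInterfaceAddress>
      <AllowMulticast>false</AllowMulticast>
    </General>
    <Partitioning>
      <!-- on node x.x.x.x this would be x.x.y.y, and vice versa -->
      <GlobalPartition Address="x.x.y.y"/>
    </Partitioning>
    <Discovery enabled="true">
      <PortNr>54120</PortNr>
      <ProbeList>x.x.y.y</ProbeList>
    </Discovery>

Both nodes need the same port number and mutually reachable addresses; if either side is behind NAT, plain unicast addressing like this may still not be sufficient.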
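
For the fragment-buffer error in result 3, a sketch of where the two settings named in the error message live in the channel configuration. The element names (FragmentSize and Receiving/DefragBufferSize) are taken from the quoted error; the channel attributes and numeric values are illustrative only, and increasing them trades memory for headroom rather than fixing a reader that cannot keep up:

    <!-- Hypothetical sketch: larger fragments and a deeper defragmentation
         buffer for the BestEffort channel. Values are examples, not tuned. -->
    <Channels>
      <Channel name="BestEffort" reliable="false">
        <FragmentSize>8192</FragmentSize>
        <Receiving>
          <DefragBufferSize>20000</DefragBufferSize>
        </Receiving>
      </Channel>
    </Channels>

If the enlarged buffers fill up again, that points back to the underlying concern raised in the post: incoming data arriving faster than it is being consumed.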
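
For the dlopen error in result 5, a sketch of the kind of ospl.xml service entry the post refers to, only to illustrate what the domain service is trying to load when it reports "Problem opening 'networking'". The structure follows the usual OpenSplice service-declaration pattern, but the domain name is a placeholder and the entry is not taken from the poster's actual file:

    <!-- Hypothetical sketch of the service declaration being loaded.
         <Command> names a networking service library/executable that the
         dynamic loader must be able to resolve when spliced starts. -->
    <Domain>
      <Name>ospl_networking</Name>
      <Service name="networking">
        <Command>networking</Command>
      </Service>
    </Domain>
    <NetworkService name="networking">
      <!-- channel and partition settings would go here -->
    </NetworkService>

The error itself says only that the loader could not find the 'networking' shared object; common first checks are whether the OpenSplice environment script (release.com) has been sourced so that the installation's lib directory is on the library path, and whether the installed edition actually ships that service at all.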