OpenSplice DDS Forum

JimHayes

Members
  • Content count

    8

About JimHayes

  • Rank
    Member

Profile Information

  • Company
    Concentris Systems


  1. JimHayes

    Memory Grown (take 2)

    I've submitted this issue to Bugzilla: bug #48
  2. JimHayes

    Memory Grown (take 2)

    I've found that this issue is not solely related to octet sequences. I tried replacing the octet sequence with a string, then a char sequence, then a long sequence, and hit the same problem with each. Is the example application missing some important step? Jim
  3. JimHayes

    Memory Grown (take 2)

    Greetings,

    This issue may appear to be similar to the recent 'Memory Growth' topic from user 'michael', but I think it is different. Like Michael, I too am noticing memory growth that appears to be related to the use of an octet sequence. However, I am using a single instance and a single topic. I suspect that there is a memory leak related to the repeated writing of an object instance where the size of the octet sequence changes between writes. In other words, the instance is written, then the size of the sequence is changed, then the instance is written again.

    I wrote a sample application that I believe exposes this issue. It makes use of a simple IDL struct that is just a unique identifier and an octet sequence:

        struct Simple {
            unsigned long long id;
            sequence<octet> payload;
        };
        #pragma keylist Simple id

    The application creates one instance and repeatedly writes it to a topic. Between each write, the size of the octet sequence is modified. Pseudo code (the real code is further down in this post):

        Simple simple = new Simple();
        simple.id = 1;
        for (int i = 0; i < 3000; i++) {
            simple.payload = new byte[random size];
            dataWriter.write(simple, handle);
            // pause a bit
        }

    mmstat produces the following output while the app runs. Note the 'reusable' and 'fails' columns:

        05/05/2011  available  count  used       maxUsed    reusable   fails
        11:12:39    9.238.736  12330  1.099.416  1.099.416     39.400      0
        11:12:49    9.134.464  13215  1.203.688  1.232.024    215.760      0
        11:12:59    9.134.464  13215  1.203.688  1.232.536    949.568      0
        11:13:09    9.134.464  13215  1.203.688  1.232.536  1.650.824      0
        11:13:19    9.134.464  13215  1.203.688  1.232.544  2.409.544      0
        11:13:29    9.134.464  13215  1.203.688  1.232.544  3.082.432      0
        11:13:39    9.134.464  13215  1.203.688  1.232.544  3.711.608      0
        11:13:49    9.134.464  13215  1.203.688  1.232.544  4.367.064      0
        11:13:59    9.134.464  13215  1.203.688  1.232.544  4.910.648      0
        11:14:09    9.134.464  13215  1.203.688  1.232.544  5.550.792      0
        11:14:19    9.134.464  13215  1.203.688  1.232.544  6.077.800      0
        11:14:29    9.134.464  13215  1.203.688  1.232.544  6.626.512      0
        11:14:39    9.134.464  13215  1.203.688  1.232.544  7.049.936      0
        11:14:49    9.134.464  13215  1.203.688  1.232.544  7.523.240      0
        11:14:59    9.134.464  13215  1.203.688  1.232.544  7.956.568      0
        11:15:09    9.134.464  13215  1.203.688  1.232.544  8.358.672      0
        11:15:19    9.134.464  13215  1.203.688  1.232.544  8.756.328      0
        11:15:29    9.134.360  13216  1.203.792  1.232.544  9.132.480      2

    Details:

      • OpenSplice: Community Edition 5.4.1 (Visual Studio 2008) using the default configuration.
      • O/S: Windows 7 (64-bit).
      • I am using the Java API running under JDK 1.6.0_23 (32-bit); also reproduced in JDK 1.5.0_22.

    Test description: I create one instance of the object and write it to a topic over and over again. Between writes, I assign a new octet sequence to the instance. The key to the memory leak is that the size of the octet sequence is changed each time the instance is written. If we were to assign a new octet sequence each time but keep the size the same, there would be no leak.

    The code:

        public class OctetSequenceMemLeak {

            /**
             * Toggle CRASH_JVM between true/false to see the app crash or not when it is run.
             */
            public static final boolean CRASH_JVM = true;

            public static void main(String[] args) {
                String topicName = "Topic_Name";
                String partitionName = "Partition_Name";
                DomainParticipantFactory participantFactory = DomainParticipantFactory.get_instance();
                DomainParticipant participant = participantFactory.create_participant(null,
                        PARTICIPANT_QOS_DEFAULT.value, null, STATUS_MASK_NONE.value);

                // Set up the quality of service.
                TopicQosHolder topicQosHolder = new TopicQosHolder();
                participant.get_default_topic_qos(topicQosHolder);
                TopicQos topicQos = topicQosHolder.value;
                topicQos.durability.kind = DurabilityQosPolicyKind.VOLATILE_DURABILITY_QOS;
                topicQos.durability_service.max_instances = 1;
                topicQos.durability_service.max_samples = 1;
                topicQos.durability_service.max_samples_per_instance = 1;
                topicQos.durability_service.service_cleanup_delay = new Duration_t(1, 0);
                topicQos.history.kind = HistoryQosPolicyKind.KEEP_LAST_HISTORY_QOS;
                topicQos.history.depth = 1;
                topicQos.resource_limits.max_instances = 1;
                topicQos.resource_limits.max_samples = 1;
                topicQos.resource_limits.max_samples_per_instance = 1;

                // Create the topic.
                SimpleTypeSupport typeSupport = new SimpleTypeSupport();
                typeSupport.register_type(participant, typeSupport.get_type_name());
                Topic topic = participant.create_topic(topicName, typeSupport.get_type_name(),
                        topicQos, null, STATUS_MASK_NONE.value);

                // Create the publisher and data writer.
                PublisherQosHolder pQos = new PublisherQosHolder();
                participant.get_default_publisher_qos(pQos);
                pQos.value.partition.name = new String[] {partitionName};
                Publisher publisher = participant.create_publisher(pQos.value, null,
                        STATUS_MASK_NONE.value);
                DataWriter dw = publisher.create_datawriter(topic,
                        DATAWRITER_QOS_USE_TOPIC_QOS.value, null, STATUS_MASK_NONE.value);
                SimpleDataWriter dataWriter = SimpleDataWriterHelper.narrow(dw);
                DataWriterQosHolder dataWriterQosHolder = new DataWriterQosHolder();
                dataWriter.get_qos(dataWriterQosHolder);
                DataWriterQos dataWriterQos = dataWriterQosHolder.value;
                dataWriterQos.writer_data_lifecycle.autodispose_unregistered_instances = true;
                dataWriterQos.writer_data_lifecycle.autopurge_suspended_samples_delay = new Duration_t(1, 0);
                dataWriterQos.writer_data_lifecycle.autounregister_instance_delay = new Duration_t(1, 0);
                dataWriter.set_qos(dataWriterQos);

                // Create the subscriber and data reader.
                SubscriberQosHolder sQos = new SubscriberQosHolder();
                participant.get_default_subscriber_qos(sQos);
                sQos.value.partition.name = new String[] {partitionName};
                Subscriber subscriber = participant.create_subscriber(sQos.value, null,
                        STATUS_MASK_NONE.value);
                DataReader dr = subscriber.create_datareader(topic,
                        DATAREADER_QOS_USE_TOPIC_QOS.value, null, STATUS_MASK_NONE.value);
                SimpleDataReader dataReader = SimpleDataReaderHelper.narrow(dr);

                // Add a listener to the data reader.
                DataReaderListener dataReaderListener = createDataReaderListener();
                dataReader.set_listener(dataReaderListener, DATA_AVAILABLE_STATUS.value);

                // Create an instance of the IDL typed object and cause OpenSplice to run out
                // of memory by updating the size of the instance's byte array each time we
                // write it.
                Random random = new Random(1);
                Simple instance = new Simple();
                instance.id = 1;
                long handle = dataWriter.register_instance(instance);
                int numWrites = 4096;
                for (int i = 0; i < numWrites; i++) {
                    // We are going to create a new byte array each time we write the sample.
                    // If the size of the byte array changes, OSPL slowly runs out of memory
                    // with each write.
                    if (CRASH_JVM) {
                        // The JVM will crash eventually if we change the size of the array
                        // each time. The error-log will report something like:
                        // "Memory claim denied: required size (13216) exceeds available
                        //  resources (14824)!"
                        instance.payload = new byte[random.nextInt(16384)];
                    } else {
                        // OSPL is fine if we create a new byte array of the same size each time.
                        instance.payload = new byte[16384];
                    }
                    System.out.println("Write id: " + instance.id
                            + " byte length: " + instance.payload.length
                            + " (" + (i + 1) + "/" + numWrites + ")");
                    dataWriter.write(instance, handle);
                    // Sleep for a bit.
                    try {
                        Thread.sleep(100);
                    } catch (InterruptedException ignore) {
                    }
                }

                // Cleanup.
                dataWriter.dispose(instance, handle);
                dataWriter.unregister_instance(instance, handle);
                dataReader.set_listener(null, DATA_AVAILABLE_STATUS.value);
                publisher.delete_datawriter(dataWriter);
                subscriber.delete_datareader(dataReader);
                participant.delete_publisher(publisher);
                participant.delete_subscriber(subscriber);
                participant.delete_topic(topic);
                participant.delete_contained_entities();
                participantFactory.delete_participant(participant);
            }

            public static DataReaderListener createDataReaderListener() {
                return new DataReaderListener() {

                    public void on_data_available(DataReader dataReader) {
                        SimpleDataReader reader = SimpleDataReaderHelper.narrow(dataReader);
                        SimpleSeqHolder seqHolder = new SimpleSeqHolder();
                        SampleInfoSeqHolder infoSeqHolder = new SampleInfoSeqHolder();
                        reader.take(seqHolder, infoSeqHolder, LENGTH_UNLIMITED.value,
                                ANY_SAMPLE_STATE.value, ANY_VIEW_STATE.value,
                                ANY_INSTANCE_STATE.value);
                        for (int i = 0; i < seqHolder.value.length; i++) {
                            if (infoSeqHolder.value[i].valid_data) {
                                System.out.println("Read id: " + seqHolder.value[i].id
                                        + " byte length: " + seqHolder.value[i].payload.length);
                            }
                        }
                        reader.return_loan(seqHolder, infoSeqHolder);
                    }

                    public void on_requested_deadline_missed(DataReader dataReader,
                            RequestedDeadlineMissedStatus status) { }

                    public void on_requested_incompatible_qos(DataReader dataReader,
                            RequestedIncompatibleQosStatus status) { }

                    public void on_sample_rejected(DataReader dataReader,
                            SampleRejectedStatus status) { }

                    public void on_liveliness_changed(DataReader dataReader,
                            LivelinessChangedStatus status) { }

                    public void on_subscription_matched(DataReader dataReader,
                            SubscriptionMatchedStatus status) { }

                    public void on_sample_lost(DataReader dataReader,
                            SampleLostStatus status) { }
                };
            }
        }

    Thanks in advance for your time, Jim
  4. JimHayes

    Best Practices for Memory Management when using Java

    Hi Emiel, Thanks for getting back to me so quickly.

    First, some clarification: I don't always create a DataWriter just so that I can write a single object; I used that as an example to demonstrate my suspicion that treating DataWriters (or DataReaders) as regular Java objects can cause a memory leak if the reference to the DataWriter goes out of scope before the DataWriter is explicitly deleted from the publisher. When this happens, you no longer have a reference to the DataWriter, so you cannot tell the publisher to delete it, and the garbage collector will never reclaim that memory since JNI is pointing at it. I placed my example in a method with the intention of showing a DataWriter go out of scope, but I think I ended up adding confusion instead. :-(

    Additionally, my previous example only referenced a single topic because I wanted to keep the example as simple as possible. I had no intention of giving the impression that I am designing an application with one single monolithic topic used to transport all the data in my app.

    About 'A', 'B', 'C' and 'D': 'A' is a Java object that has a reference to a DataWriter and is responsible for publishing data on a particular topic/partition. 'B' is a Java object that has a reference to a DataReader and is responsible for listening to data being published on that same topic/partition. 'A' and 'B' (and 'C' and 'D') are objects participating in a distributed application that uses DDS to transport data between the various 'nodes' of the application.

    Back to my issues with memory management: let's say that when 'A' is instantiated, it creates the DataWriter instance that it is going to use to publish data, keeps its reference to this DataWriter for the duration of its lifetime, and reuses this same DataWriter instance every time it needs to publish. At some point there will be no references to 'A', and the Java garbage collector will decide to reclaim the memory used by 'A'. I believe this is where a potential pitfall lies for Java programmers: the DataWriter owned by 'A' needs to be explicitly deleted from the publisher before 'A' is garbage collected. If publisher.delete_datawriter(dataWriter) is not invoked at that point, then the garbage collector will never release the resources associated with this DataWriter.

    I think I need to work on the clarity of my questions in addition to my understanding of DDS :-) Thanks again, Jim
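
    To make the pitfall concrete, here is a minimal sketch of the pattern I mean. The class, the 'Example' generated types, and the explicit close() method are my own illustration, not code from my application; imports from the DDS package are omitted, as in my sample above:

        // Hypothetical owner object 'A': holds one DataWriter for its lifetime.
        public class ExamplePublisherA {

            private final Publisher publisher;
            private final ExampleDataWriter dataWriter;

            public ExamplePublisherA(Publisher publisher, Topic topic) {
                this.publisher = publisher;
                DataWriter dw = publisher.create_datawriter(topic,
                        DATAWRITER_QOS_USE_TOPIC_QOS.value, null, STATUS_MASK_NONE.value);
                this.dataWriter = ExampleDataWriterHelper.narrow(dw);
            }

            public void publish(Example sample) {
                dataWriter.write(sample, HANDLE_NIL.value);
            }

            // Must be called explicitly before the last reference to this object
            // is dropped; the garbage collector will never do this for us,
            // because JNI keeps the writer reachable.
            public void close() {
                publisher.delete_datawriter(dataWriter);
            }
        }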
  5. JimHayes

    Best Practices for Memory Management when using Java

    Emiel, thank you for your response. I appreciate your feedback. I'm sure there is some fundamental concept that I have failed to grasp. I wonder if I could provide a quick example of what I am doing; maybe this will expose a flaw in my approach.

    Let's say that I have a topic for some object, let's call it 'Example'. And let's say that the following relationships exist: Object 'A' is publishing 'Example' objects, and object 'B' needs to read the 'Example' objects published by 'A'. Object 'C' is publishing 'Example' objects, and object 'D' needs to read the 'Example' objects published by 'C'.

        A ---> B
        C ---> D

    I want to ensure that object 'D' does not read in objects published by object 'A', and object 'B' does not read in objects published by 'C'. In order to do this I have been using partitions: the communication between 'A' and 'B' is on partition "A to B", and the communication between 'C' and 'D' is on partition "C to D". This results in 2 data readers and 2 data writers:

      • Object 'A' uses a DataWriter for partition "A to B".
      • Object 'B' uses a DataReader for partition "A to B".
      • Object 'C' uses a DataWriter for partition "C to D".
      • Object 'D' uses a DataReader for partition "C to D".

    In your response, you stated that one would commonly create one DataWriter for a type of objects. If I were to do this, and have one ExampleDataWriter (used by 'A' and 'C' in this example) and one ExampleDataReader (used by 'B' and 'D'), how could I ensure that object 'B' only reads objects published by 'A' and object 'D' only reads objects published by 'C'?

    So this is why I have ended up with lots of DataReader and DataWriter instances per object type: the application that I am implementing has many pairs of objects like 'A and B' and 'C and D'. Does this seem like a flawed approach? Thank you for your time (and patience)! Jim
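
    For reference, this is roughly how I bind a publisher to a partition. The helper name is mine; the QoS calls mirror the ones in my sample application in the 'Memory Grown (take 2)' post:

        // Hypothetical helper: create a Publisher bound to exactly one partition
        // (e.g. "A to B"). Writers created from it only match readers whose
        // Subscriber joined an intersecting partition set.
        public static Publisher createPublisherForPartition(DomainParticipant participant,
                                                            String partitionName) {
            PublisherQosHolder qosHolder = new PublisherQosHolder();
            participant.get_default_publisher_qos(qosHolder);
            qosHolder.value.partition.name = new String[] { partitionName };
            return participant.create_publisher(qosHolder.value, null, STATUS_MASK_NONE.value);
        }

    The Subscriber side for 'B' or 'D' would be built the same way, using get_default_subscriber_qos and create_subscriber with the matching partition name.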
  6. JimHayes

    Best Practices for Memory Management when using Java

    Greetings, I'm using the OpenSplice Java API, and I suspect that managing memory is not quite as simple as the OpenSplice Java Reference Guide suggests. The 'Memory Management' section of the reference suggests that data readers and writers can be left to the garbage collector like ordinary Java objects. I think the reality is more complex: if you rely on the garbage collector to clean up your data readers and writers, you will end up leaking memory.

    Suppose you wanted to write one instance to a topic and you chose to implement that in the following way:

        public void writeMyObject() {
            MyObject myObject = [create and populate the object];
            MyObjectDataWriter dataWriter = [create data writer];
            dataWriter.write(myObject, HANDLE_NIL.value);
        }

    After the method completes, the dataWriter instance is out of scope, and if it were a normal Java object it would be garbage collected. But it is not a normal Java object, because it is associated through JNI with a native data writer. JNI will maintain a reference to the Java instance as long as the native instance exists (which will be forever, since we didn't delete the writer). This JNI reference prevents the Java garbage collector from freeing the memory associated with the data writer. If you run code similar to the above example and then run a heap analyzer (like 'visualvm'), you will see data readers and writers in memory that never go away because JNI is pointing at them.

    So clearly I need to be deleting the data writer explicitly. I have tried deleting the data writer immediately after writing the data, but when I do this the reader never receives the data. It looks like I need to write the data, wait for enough time to elapse so that the reader gets a chance to read the data, and then delete the data writer. I have found myself writing data like the following example, where I write some data and then schedule the data writer and instance for cleanup after a short wait:

        public void writeMyObject() {
            MyObject myObject = [create and populate the object];
            MyObjectDataWriter dataWriter = [create data writer];
            dataWriter.write(myObject, HANDLE_NIL.value);
            // Schedule the data writer for deletion and the instance for disposal.
            scheduleForDeletion(dataWriter);
            scheduleForDisposal(myObject);
        }

    This seems wrong to me, and I'm sure that I am misunderstanding how I should be managing my resources. So what is the proper way to read and write data, using the Java API, such that we do not leak memory? Thanks in advance, Jim
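
    For concreteness, here is a minimal sketch of what my scheduleForDeletion helper looks like. The Publisher parameter, the single-threaded executor, and the 5-second grace period are my own assumptions for illustration, not a recommended pattern:

        import java.util.concurrent.Executors;
        import java.util.concurrent.ScheduledExecutorService;
        import java.util.concurrent.TimeUnit;

        public class WriterCleanup {

            // Deletes the writer from its publisher after a grace period,
            // hoping readers have taken the sample by then.
            private static final ScheduledExecutorService CLEANUP =
                    Executors.newSingleThreadScheduledExecutor();

            public static void scheduleForDeletion(final Publisher publisher,
                                                   final MyObjectDataWriter dataWriter) {
                CLEANUP.schedule(new Runnable() {
                    public void run() {
                        // The grace period is a guess; it does not guarantee
                        // that any reader has actually taken the data.
                        publisher.delete_datawriter(dataWriter);
                    }
                }, 5, TimeUnit.SECONDS);
            }
        }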
  7. JimHayes

    Topic deletion and the builtin DCPSTopic topic

    Thanks for your quick reply, Hans. A quick follow-up question, if I may: given the persistent nature of topics in DDS, should I be less cavalier about creating a large number of new topics rather than reusing existing ones? Something I have been doing is creating numerous 'one-time-use' topics: create the topic, write/read some data, then remove it. This now seems like an approach that will eventually cause memory issues. Is there a document describing 'best practices' for handling topics? Thanks, Jim
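
    To make the reuse idea concrete, a minimal sketch of what I have in mind instead (the cache class and helper are hypothetical): create each topic once and hand the same Topic object back on later requests, rather than minting one-time-use topics:

        import java.util.HashMap;
        import java.util.Map;

        // Hypothetical topic cache: reuse existing topics instead of creating
        // one-time-use topics that linger in the system after delete_topic.
        public class TopicCache {

            private final Map<String, Topic> topics = new HashMap<String, Topic>();

            public Topic getOrCreate(DomainParticipant participant, String topicName,
                                     String typeName, TopicQos qos) {
                Topic topic = topics.get(topicName);
                if (topic == null) {
                    topic = participant.create_topic(topicName, typeName, qos,
                            null, STATUS_MASK_NONE.value);
                    topics.put(topicName, topic);
                }
                return topic;
            }
        }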
  8. JimHayes

    Topic deletion and the builtin DCPSTopic topic

    Greetings, I have recently been playing around with the built-in topics: DCPSTopic, DCPSSubscription and DCPSPublication. I wrote a simple program that listens to DataReaders connected to those topics and prints to stdout whenever a subscription, publication or topic is created or deleted. This works great for subscriptions and publications, but I never see topics being deleted while listening to the DCPSTopic topic (I do see them get added, though). These topics are being deleted via a call to domainParticipant.delete_topic(topic), and the return code is RETCODE_OK.

    Is there a way to delete the topic such that you can hear about it on the DCPSTopic topic? My first thought was that the topics aren't being deleted because they have readers or writers attached to them, but if that were the case, delete_topic would have returned RETCODE_PRECONDITION_NOT_MET. What could be some other causes for topics never being deleted? Thanks! Jim
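
    For reference, a minimal sketch of how I take samples from the built-in DCPSTopic reader, assuming the standard DCPS built-in subscriber API as mapped to Java. I would expect a deletion to show up as a NOT_ALIVE_DISPOSED instance state, which is exactly what I never see:

        // Look up the pre-created reader for the DCPSTopic built-in topic.
        Subscriber builtinSubscriber = participant.get_builtin_subscriber();
        DataReader dr = builtinSubscriber.lookup_datareader("DCPSTopic");
        TopicBuiltinTopicDataDataReader topicReader =
                TopicBuiltinTopicDataDataReaderHelper.narrow(dr);

        TopicBuiltinTopicDataSeqHolder data = new TopicBuiltinTopicDataSeqHolder();
        SampleInfoSeqHolder info = new SampleInfoSeqHolder();
        topicReader.take(data, info, LENGTH_UNLIMITED.value,
                ANY_SAMPLE_STATE.value, ANY_VIEW_STATE.value, ANY_INSTANCE_STATE.value);
        for (int i = 0; i < data.value.length; i++) {
            if (info.value[i].instance_state == NOT_ALIVE_DISPOSED_INSTANCE_STATE.value) {
                // A topic deletion should surface here.
                System.out.println("Topic deleted");
            } else if (info.value[i].valid_data) {
                System.out.println("Topic created: " + data.value[i].name);
            }
        }
        topicReader.return_loan(data, info);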