[Bugs] #3286 UNSP: (RACE CONDITION, only happens for files of small size, < ~ 10KB) No file-completion signal at the receiver

Sugar Labs Bugs bugtracker-noreply at sugarlabs.org
Thu Jan 19 02:52:30 EST 2012


#3286: (RACE CONDITION, only happens for files of small size, < ~ 10KB) No file-
completion signal at the receiver
------------------------------------------+---------------------------------
    Reporter:  ajay_garg                  |          Owner:             
        Type:  defect                     |         Status:  new        
    Priority:  Unspecified by Maintainer  |      Milestone:  0.94       
   Component:  sugar                      |        Version:  0.94.x     
    Severity:  Major                      |       Keywords:  olpc, dx3  
Distribution:                             |   Status_field:  Unconfirmed
------------------------------------------+---------------------------------

Comment(by ajay_garg):

 I need some help.

 =================================================================================
 Please find attached the following ::

 a. culprit_success.log : shell.log contents when an 860B file
 (__init__.py) is transferred successfully - that is, the "Dismiss" option
 is seen at the receiving end.

 b. culprit_failure.log : shell.log contents when the same 860B file
 (__init__.py) is transferred, but the "Dismiss" option is not seen;
 instead the transfer hangs at "Cancel" at both the receiving and the
 sending end.

 =================================================================================

 I have figured out where the workflow is disrupted when the race
 condition is hit. It happens in

         "def __read_async_cb(self, input_stream, result)"

 of the file

         "install/lib/python2.7/site-packages/jarabe/model/filetransfer.py".

 After the first successful "data = input_stream.read_finish(result)",
 which reads in the 860 bytes, the callback is never invoked again for the
 final zero-length read - the read that would cause the input_stream to be
 closed.
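
 To make the disruption concrete, here is a minimal sketch of the trace
 points that can be added at the top of that callback (the log messages
 below are mine, purely for debugging; they are not part of the shipped
 code). In the failing runs, only the first branch would ever be logged:

 #############################################################
     # Sketch: top of __read_async_cb with debug tracing added.
     def __read_async_cb(self, input_stream, result):
         data = input_stream.read_finish(result)

         if data:
             # First (and, for an 860B file, only non-empty) chunk.
             # Logged once in both the success and the failure case.
             logging.debug('read %d bytes', len(data))
         else:
             # Zero-length read marking end-of-stream.  Logged in the
             # success case only; in the failure case the callback is
             # never entered again, so the input stream never gets
             # closed here.
             logging.debug('end of stream reached')
 #############################################################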


 I tried simplifying the class definition of "class
 StreamSplicer(gobject.GObject)" from

 #############################################################
 class StreamSplicer(gobject.GObject):
     _CHUNK_SIZE = 10240  # 10K
     __gsignals__ = {
         'finished': (gobject.SIGNAL_RUN_FIRST,
                      gobject.TYPE_NONE,
                      ([])),
     }

     def __init__(self, input_stream, output_stream):
         gobject.GObject.__init__(self)

         self._input_stream = input_stream
         self._output_stream = output_stream
         self._pending_buffers = []

     def start(self):
         self._input_stream.read_async(self._CHUNK_SIZE,
                                       self.__read_async_cb,
                                       gobject.PRIORITY_LOW)

     def __read_async_cb(self, input_stream, result):
         data = input_stream.read_finish(result)

         if not data:
             logging.debug('closing input stream')
             self._input_stream.close()
         else:
             self._pending_buffers.append(data)
             self._input_stream.read_async(self._CHUNK_SIZE,
                                             self.__read_async_cb,
                                             gobject.PRIORITY_LOW)
         self._write_next_buffer()

     def __write_async_cb(self, output_stream, result, user_data):
         count_ = output_stream.write_finish(result)

         if not self._pending_buffers and \
                 not self._output_stream.has_pending() and \
                 not self._input_stream.has_pending():
             logging.debug('closing output stream')
             output_stream.close()
             self.emit('finished')
         else:
             self._write_next_buffer()

     def _write_next_buffer(self):
         if self._pending_buffers and \
                 not self._output_stream.has_pending():
             data = self._pending_buffers.pop(0)
             # TODO: we pass the buffer as user_data because of
             # http://bugzilla.gnome.org/show_bug.cgi?id=564102
             self._output_stream.write_async(data, self.__write_async_cb,
                                             gobject.PRIORITY_LOW,
                                             user_data=data)
 #############################################################

 to

 #############################################################
 class StreamSplicer(gobject.GObject):
     _CHUNK_SIZE = 10240  # 10K
     __gsignals__ = {
         'finished': (gobject.SIGNAL_RUN_FIRST,
                      gobject.TYPE_NONE,
                      ([])),
     }

     def __init__(self, input_stream, output_stream):
         gobject.GObject.__init__(self)

         self._input_stream = input_stream
         self._output_stream = output_stream

     def start(self):
         self._input_stream.read_async(self._CHUNK_SIZE,
                                       self.__read_async_cb,
                                       gobject.PRIORITY_LOW)

     def __read_async_cb(self, input_stream, result):
         data = input_stream.read_finish(result)

         if not data:
             logging.debug('closing input stream')
             self._input_stream.close()
             logging.debug('closing output stream')
             self._output_stream.close()
         else:
             self._output_stream.write_async(data, self.__write_async_cb,
                                             gobject.PRIORITY_LOW,
                                             user_data=data)

     def __write_async_cb(self, output_stream, result, user_data):
         output_stream.write_finish(result)
         self._input_stream.read_async(self._CHUNK_SIZE,
                                       self.__read_async_cb,
                                       gobject.PRIORITY_LOW)
 #############################################################

 but I still hit the same intermittent buggy behaviour.
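
 With the original class definition, one way I can narrow this down
 further is to hook the 'finished' signal of the splicer for debugging,
 so I can tell "the splice never finishes" apart from "the splice
 finishes but the receiver UI is not updated". A minimal sketch (the
 handler and log message are mine; input_stream and output_stream stand
 for whatever the caller already has when it creates the splicer):

 #############################################################
 def __finished_debug_cb(splicer):
     # Debug-only handler: records that the splicer reported completion.
     logging.debug('StreamSplicer emitted finished')

 splicer = StreamSplicer(input_stream, output_stream)
 splicer.connect('finished', __finished_debug_cb)
 splicer.start()
 #############################################################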



 =================================================================================

 I am now trying to understand the backend architecture of the
 send-to-friend feature, and I am not able to figure out the following ::

 In the call

                 "datastore.write(self._ds_object, transfer_ownership=True,
                                 reply_handler=self.__reply_handler_cb,
                                 error_handler=self.__error_handler_cb)"

 and

                 "datastore.write(self._ds_object, update_mtime=False)"


 in "class IncomingTransferButton" in "install/lib/python/site-
 packages/jarabe/frame/activitiestray.py"

 we don't seem to pass any bytes, that could be written on the datastore. I
 have the intuition that at the time of creating the "self._ds_object" when
 the OPEN state callback is called, there is somehow a link established
 between the "self._ds_object" and the "file-transfer" channel; however, I
 am not sure, and neither am I able to "find" any code which might give a
 clue to this. (I only see "self._ds_object = datastore.create()" in "class
 IncomingTransferButton", with absolutely no hint of the linkage between
 the self._ds_object and the file-transfer-channel/socket/bytes-read-in.).
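
 To make that intuition concrete, this is the kind of linkage I am
 imagining (purely a sketch; the "file_path" attribute and the temporary
 path are my assumptions, not something I have actually found in the
 code):

 #############################################################
 # Hypothetical sketch of the intuited linkage - NOT actual Sugar code.
 # The idea: the bytes read from the file-transfer channel get spliced
 # into a local file, and only that file's path is attached to the
 # datastore object before datastore.write() is called.
 ds_object = datastore.create()
 ds_object.file_path = '/tmp/some-incoming-transfer-file'  # assumed
 datastore.write(ds_object, transfer_ownership=True)
 #############################################################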


 I will be grateful if someone could point me to any information regarding
 the architecture of this feature (at a technical/code level; I do not
 need a step-by-step code explanation :) ). In particular, if I can learn
 how the linkage between "datastore.write()" and the bytes read in from
 the gio.Unix.InputStream socket is done, the pieces will fit together.

 Meanwhile, I will continue looking for any hints in the code that show
 the linkage between "self._ds_object" and the bytes read in from the
 socket.

 =================================================================================

-- 
Ticket URL: <http://bugs.sugarlabs.org/ticket/3286#comment:2>
Sugar Labs <http://sugarlabs.org/>
Sugar Labs bug tracking system

