[Sugar-devel] [Sugar] Implement text to speech in Sugar feature - v2
silbe at activitycentral.com
Mon Jan 30 14:55:13 EST 2012
Excerpts from Gonzalo Odiard's message of 2012-01-25 17:46:01 +0100:
> > Please use pep8 and pylint  to check your code prior to submission
> > [3,4].
> I have used it. If you see any problem tell me.
I saw at least one occurrence of three blank lines in a row. Nothing
serious, but pep8 will complain about it. We might not really care about
this particular case, but it will add to the noise, so we should
avoid it.
> > > + """
> > > + self._play_item = MenuItem('Say selected text')
> This code was commented out because I proposed two different UIs,
> and wanted to make it easy for other testers/reviewers to try the two options.
If you want to make it easier to try out different versions, you could
use if-else-blocks instead. In any case, please mark a patch that's
meant for testing rather than upstream inclusion with "RFC" (Request For
Comment) in the subject prefix.
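For example, a rough sketch of the if-else approach (the flag name and
the returned tuples are made up for illustration; the real patch would
instantiate MenuItem or a toolbar button here):

```python
# Hypothetical sketch: select between the two proposed UIs with a
# module-level flag instead of commenting code in and out.
_USE_MENU_ITEM = False  # flip to True to try the alternative UI


def build_speech_control(label='Say selected text'):
    # Return a description of the control to create; in the real
    # code this branch would construct the actual widget.
    if _USE_MENU_ITEM:
        return ('menu-item', label)
    else:
        return ('toolbar-button', label)
```

That way both variants stay syntax-checked and a reviewer only has to
flip one flag.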
> I will remove the option with the MenuItem because of GTK3 limitations
> (see Simon's mail)
Which limitations and which mail exactly? I don't see any GTK3
references in this thread.
> > Why do we need a Stop button in addition to Play/Pause? And where does
> > this callback get invoked? AFAICT the hook-up is commented out.
> If a long text is being played and you want to finish or start again
> from the beginning, you need a stop button.
How about a rewind button instead? That would make it clearer what the
difference between the buttons is.
> > [extensions/globalkey/speech.py]
> > [...]
> > > +BOUND_KEYS = ['<alt>s']
> > This will collide with shortcuts from existing applications and
> > activities, especially since GTK uses <Alt> for mnemonics. To give a
> > concrete example of a non-mnemonic case: the Scan activity uses
> > <Alt>+S to start or stop scanning.
> > Why didn't you keep the existing shortcut (<Alt><Shift>S)?
> I have changed it at Simon's suggestion. Please read the other reviews.
> Scan activity? It is not in ASLO yet. You can change it ;)
Seems I wasn't clear enough: You're going to break _a_ _lot_ of existing
applications if you add a global shortcut with only <Alt> as modifier.
Anything based on GTK and using mnemonics will have a good chance of
already using <Alt>+S. (*)
We already have a shortcut for this functionality that's less likely to
be used by applications than <Alt>+S is. We simply need to keep using
it, thereby retaining compatibility with previous Sugar versions and not
breaking random applications (that use <Alt>+S) with avoidable platform
changes.
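Concretely, the fix would just restore the previous binding (assuming
the `BOUND_KEYS` convention from extensions/globalkey/speech.py quoted
above):

```python
# Keep the pre-existing global shortcut instead of the collision-prone
# <Alt>+S; <Alt><Shift>S is far less likely to clash with GTK mnemonics.
BOUND_KEYS = ['<alt><shift>s']
```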
Re. reading other reviews: That's a lot easier if you set In-Reply-To to
the previous version of the patch in order to keep it all in the same
thread. Currently, I need to look up the discussion of each patch
version separately.
> > Also, if we save on every change, we should listen for gconf updates so
> > that outside changes (e.g. using gconf-editor or gconftool-2) will be
> > used by Sugar instead of getting overwritten on the next invocation of
> > save().
> We are not doing this in sound or network settings, right?
Yes, that's a bug we'll need to fix. I thought I'd filed a ticket to
remind us about it, but I can't locate it right now.
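The general shape of the fix would be something like the following
sketch. It uses a plain dict as a stand-in for the GConf backend so it
stays self-contained; the real code would register a change callback
with GConf (via the client's add_dir/notify_add mechanism) instead:

```python
# Sketch of the "listen for external changes" pattern: adopt outside
# edits (e.g. from gconf-editor) instead of overwriting them on save().
class SpeechSettings(object):
    def __init__(self, backend):
        self._backend = backend
        self.pitch = backend.get('pitch', 0)

    def on_external_change(self, key, value):
        # Invoked when the backend reports a change made outside Sugar;
        # update our in-memory copy so save() won't clobber it.
        if key == 'pitch':
            self.pitch = value

    def save(self):
        self._backend['pitch'] = self.pitch


backend = {'pitch': 0}
settings = SpeechSettings(backend)
settings.on_external_change('pitch', 42)  # simulated gconf-editor edit
settings.save()  # preserves the external value instead of resetting it
```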
> > > + if message.structure.get_name() == 'espeak-mark' and \
> > > + message.structure['mark'] == 'end':
> > > + self.emit('stop')
> > Do we need to set the pipeline state to gst.STATE_NULL like in
> > self.stop_sound_device() or does that happen automatically?
> Automatically? How?
That was my question. :-P
Do we need to explicitly set the pipeline state in __pipe_message_cb()
like we do in stop_sound_device(), or does this happen automatically?
> > > + def speak(self, pitch, rate, voice_name, text):
> > Why do we need to pass the parameters in every time, rather than
> > setting the properties a) on init and b) when the gconf settings /
> > UI change?
> This class does not have these variables. We would have to copy almost
> the same code into the SpeechManager class. I am looking into whether
> two classes are really needed or if I can move part of this to
> sugar-toolkit, to enable activities to use it.
Ah, I misread src.props as self.props and thought we'd already have
them as properties on the class. I still think it'd make more sense to
set them as attributes (and copy them to the pipeline on creation), but
it's not as trivial an operation as I thought it was (only just ;) ).
> > > + if not [i for i in text if i.isalnum()]:
> > > + return
> > The user will be rather puzzled why saying some texts works and others
> > simply get ignored. We should filter out the individual characters that
> > gst-plugins-espeak doesn't like, rather than bailing out completely.
> I understand the text is only ignored if there are no alphanumeric chars.
D'oh, of course, you're right. My fault.
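For the record, the guard only skips texts that contain no alphanumeric
characters at all; a hypothetical standalone version of that check:

```python
def has_speakable_content(text):
    # True as soon as the text contains at least one alphanumeric
    # character; only all-punctuation/whitespace input is skipped.
    return any(c.isalnum() for c in text)
```

So ordinary sentences with punctuation are still spoken; only input like
"!!!" is dropped.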
[voice ordering for automatic voice selection]
> Yes. I am tempted to add the voice selection to the UI too.
> In particular, there are 8 different English voices and two Spanish
> ones, and I found it difficult to find the right voice every time.
+1 on making it configurable through UI then.
It would have been nice to use the same settings that Gnome (Orca) does,
but it turns out they just removed the gconf settings backend [1,2] and
we certainly don't want to read and modify their custom json config file
(which is the only remaining configuration "backend" for Orca).
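A voice picker could at least list candidates matching the user's locale
first. A sketch (the voice names here are hypothetical; real code would
query espeak for the installed voices):

```python
def order_voices(voices, preferred_lang):
    # Stable sort: voices whose name starts with the preferred language
    # code come first; the rest keep their original relative order.
    return sorted(voices, key=lambda v: not v.startswith(preferred_lang))
```

With an English locale, e.g. order_voices(['es', 'en-us', 'fr',
'en-uk'], 'en') lists the English variants first.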
(*) According to an unscientific survey, there's a 10% chance a random
English word will start with 'S':
frequency = {}
for line in open('/usr/share/dict/british-english-insane'):
    character = unicode(line, 'utf-8').lower()[0]
    frequency[character] = frequency.get(character, 0) + 1
print float(frequency['s']) / sum(frequency.values())
Now multiply that with the number of mnemonics used within an
application.