[Sugar-devel] Fwd: Google Summer of Code proposal: Speech integration in sugar (Chirag Jain)
assim.deodia at gmail.com
Wed Mar 25 14:11:09 EDT 2009
I am forwarding your mail to the new sugar devel list. It has been shifted
to sugar-devel at lists.sugarlabs.org.
Thanks for clarifying things.
---------- Forwarded message ----------
From: Hemant Goyal <goyal.hemant at gmail.com>
Date: Wed, Mar 25, 2009 at 11:17 PM
Subject: Re: Google Summer of Code proposal: Speech integration in sugar
To: chirag jain <chiragjain1989 at gmail.com>
Cc: gsoc at lists.laptop.org, Assim Deodia <assim.deodia at gmail.com>, Simon
Schampijer <simon at schampijer.de>, Tomeu Vizoso <tomeu at tomeuvizoso.net>
Hi Chirag, and others who might be interested in integrating speech
synthesis into Sugar
You can refer to some documents to understand the current status of the
Speech Synthesis integration project. Owing to my final year in engineering
school and other personal issues that kept me busy, I was unable to
contribute to the project after GSoC.
Here is a snapshot of the project for you:
1. I did write patches for the Configuration Module of Speech Synthesis;
however, when I last checked, the Sugar architecture had undergone some
changes that made my patches obsolete. You will have to rewrite those
patches, but most of the logic will remain the same.
2. You will have to understand the python-dotconf API that I wrote to
modify the configuration files that Speech-dispatcher creates. The API is
used in the patches, which will serve as a ready reference for
understanding it. The python-dotconf project page can be found here:
http://code.google.com/p/python-dotconf/
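To give you a feel for what such an API has to do: dotconf-style configuration
files (the format Speech-dispatcher uses) are essentially "Key value" lines. The
sketch below shows the kind of parse/update operations involved; the function
names and the DefaultRate/DefaultVoiceType options are illustrative only, NOT
the actual python-dotconf API - see the project page for the real interface.

```python
# Rough sketch of dotconf-style config handling. Function names are
# hypothetical, not the real python-dotconf API.

def parse_dotconf(text):
    """Parse 'Key value' lines, skipping blanks and '#' comments."""
    options = {}
    for line in text.splitlines():
        stripped = line.strip()
        if not stripped or stripped.startswith('#'):
            continue
        key, _, value = stripped.partition(' ')
        options[key] = value.strip().strip('"')
    return options

def set_option(text, key, value):
    """Return new config text with `key` replaced, or appended if absent."""
    out, found = [], False
    for line in text.splitlines():
        parts = line.split(None, 1)
        if parts and parts[0] == key:
            out.append('%s "%s"' % (key, value))
            found = True
        else:
            out.append(line)
    if not found:
        out.append('%s "%s"' % (key, value))
    return '\n'.join(out)

conf = 'DefaultRate 0\nDefaultVoiceType "MALE1"'
conf = set_option(conf, 'DefaultRate', '20')
print(parse_dotconf(conf)['DefaultRate'])  # prints: 20
```

The patches themselves remain the authoritative reference for how the real API
is called.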
3. You might have to undertake some Fedora packaging work to package the
latest release of speech-dispatcher. For this you will need some elementary
knowledge of RPM package creation using SPEC files. The SPEC files are
available in the GSOC project files that I mentioned before.
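For orientation, a SPEC file has roughly the shape below. The version number,
URL, and file list here are placeholders for illustration, not the actual
speech-dispatcher packaging - use the SPEC files from the project files as your
real starting point.

```
Name:           speech-dispatcher
Version:        0.6.7                # placeholder version
Release:        1%{?dist}
Summary:        Common interface to speech synthesizers
License:        GPLv2+
Source0:        %{name}-%{version}.tar.gz

%description
Device-independent layer for speech synthesis.

%prep
%setup -q

%build
%configure
make %{?_smp_mflags}

%install
make install DESTDIR=%{buildroot}

%files
%{_bindir}/speech-dispatcher    # placeholder file list
```

Building is then a matter of `rpmbuild -ba speech-dispatcher.spec` with the
tarball in your SOURCES directory.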
4. Essentially, most of the groundwork to integrate speech synthesis into
the Sugar/OLPC laptop is done. What remains is to provide two frontend
interfaces to the Speech Device [you can refer to the patches to understand
what the Speech Device is] so that speech settings can be controlled
through a GUI.
5. While I would not categorize the project as advanced, I feel that you
will certainly need skills with pyGTK to design the interfaces, and some
skill in working with the Sugar graphical user interface - I am sure the
lead Sugar devs will help you out on this. erikos, tomeu, and eben in the
Sugar community were very helpful. (erikos was my mentor, btw.)
Feel free to ask me for more detailed technical designs. Last year the
project scope became too big for me to manage and complete within the
timeframe of GSoC. You can leverage all the work that I have done and take
this project to success in two months.
On Wed, Mar 25, 2009 at 10:36 PM, chirag jain <chiragjain1989 at gmail.com> wrote:
> I am an undergraduate student at Netaji Subhas Institute of
> Technology, New Delhi. I am currently working on my proposal for
> speech integration in Sugar, using it as a back-end for the Listen
> Spell activity.
> I have gone through last year's speech integration project. This
> project is also present in this year's ideas list. But as I am new to
> all of this, I would like a little help from you.
> I have gone through the Sugar ideas list. It mentions some
> requirements: first, the configuration management tool, and second,
> a UI for the config tool.
> Can you please explain to me the current status of this project and
> the further changes that it requires? Also, what sort of technical
> things would I need to learn before attempting this project?
> Even a small contribution from you would be very helpful to me.