[sugar] Sophie and the XO
Thu Mar 8 15:01:18 EST 2007
On Thu, Mar 08, 2007 at 01:52:23PM -0500, Eben Eliason wrote:
> >It would be really nice if an authoring tool could import scores created
> >in TamTam or content from other applications on the machine.
> >Applications should all work well together.
> I want to second this point. The core idea behind "Collage" is that
> it would allow embedding of any content that other activities generate
> on the laptops. Of course, many activities will support well known
> media formats, but TamTam is the perfect example of a format specific
> to the laptops that it would be great to have support for. Perhaps
> some means of plugin support needs to be considered?
Hi, I'm new to this list; I'm a TamTam developer. This turned out to be a long
email about the olpc-csound-server and a plugin module for playing TamTam files.
We are presently working on TamTam's file format, so it would be a great time to
brainstorm on this sort of thing... how should TamTam interact with the journal,
with a collage activity? I think a wiki page would be a good forum for this
discussion. I have brainstormed for 20 seconds at the following location:
It is primarily of interest to TamTam developers, but if the Collage/Sophie
designers have any ideas/constraints... put'em up.
Also, the idea of providing a TamTam [plugin] API is a good one, and I think
closely related to the idea of the olpc-csound-server. I imagine that the
olpc-csound-server was intended to provide an easy way for any application to
trigger sound effects with a minimal amount of fuss, but I'd like to argue that
it can and should provide a richer, more powerful interface with built-in
loop-control functionality.
First, a little digression about our own experience in engineering TamTam. The
olpc-csound-server didn't work for us (TamTam) for three reasons:
- We wanted control over our own csound orchestra file and a global csound
instance makes this difficult.
- We could not achieve correct (timely) note playback when the csound rendering
  loop was so far removed from our own generating loop: it ran in a thread in
  a different process, connected by a network socket.
- It is inevitable that the sound buffer will under-run sometimes, and we needed
to be able to recover correctly (anticipating the case of networked TamTams).
To address these issues, we wrote a C plugin for our python app that does three
things:
- it contains a midi-like representation of our song, as well as looping (melody) controls
- a high-frequency (100Hz) callback checks for pending events and translates
them into csound events with correct timing, then asks csound to generate
audio data into a buffer.
- it manages the alsa device independently of csound, and writes csound's output
  to the sound hardware when appropriate
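To make the second point concrete, here is a minimal Python sketch of the kind of high-frequency dispatch loop described above. The names and data layout (a heap of (onset, score-line) pairs, a due_events helper) are invented for illustration, not TamTam's actual C plugin; the idea is just that a 100Hz tick pops every pending event whose onset falls inside the current tick and hands it to csound with a small offset, so note timing no longer depends on the latency of the connection that delivered the event.

```python
import heapq

TICK = 0.01  # 100 Hz callback period, as in the plugin described above


def due_events(song, now):
    """Pop and return (offset, score_line) pairs due in [now, now + TICK).

    `song` is a heap of (onset_seconds, score_line) tuples; the offset
    within the tick lets the renderer place each note sample-accurately.
    """
    due = []
    while song and song[0][0] < now + TICK:
        onset, line = heapq.heappop(song)
        due.append((max(0.0, onset - now), line))
    return due


if __name__ == "__main__":
    song = [(0.003, "i1 0 1 440"), (0.012, "i1 0 1 660")]
    heapq.heapify(song)
    print(due_events(song, 0.0))   # only the first note falls in this tick
    print(due_events(song, 0.01))  # the second note arrives next tick
```

Because the whole song lives next to the renderer, a late or bursty control connection can only delay *edits* to the song, never the playback of notes already scheduled.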
Since our plugin has a representation of the entire loop internally, the speed
and latency of the connection that sends messages to the plugin are no longer
especially important. In effect, we have created a different sort of
csound-server that is local to our own application, but which would continue to
serve TamTam equally well if it were a standalone daemon, like the
olpc-csound-server currently is. I'm suggesting that we could fork our internal
csound-server for the rest of the sugar applications to use, by way of providing
a TamTam-player plugin. Any other application that uses loop-based
sound effects, such as the proposed collage tool or any sort of mod tracker,
will benefit from loop controls being built into the csound-server. Of course,
at the same time, I'm open to suggestions for how our implementation of that
looping mechanism could work differently to support a wider variety of apps.
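As a strawman for that discussion, here is a sketch of the loop controls such a shared csound-server might expose to clients. Everything here (LoopServer, define_loop, events_at, the beat-based addressing) is invented for illustration, not an existing API: the point is that the server keeps each client's loop internally, so clients only send edits and transport commands, never per-note messages on a latency-sensitive path.

```python
class LoopServer:
    """Hypothetical server-side loop registry (illustration only)."""

    def __init__(self):
        self.loops = {}  # loop_id -> {"events", "beats", "playing"}

    def define_loop(self, loop_id, events, beats):
        """Register a midi-like event list that repeats every `beats` beats.

        `events` is a list of (onset_beat, event) pairs.
        """
        self.loops[loop_id] = {"events": list(events), "beats": beats,
                               "playing": False}

    def play(self, loop_id):
        self.loops[loop_id]["playing"] = True

    def stop(self, loop_id):
        self.loops[loop_id]["playing"] = False

    def events_at(self, loop_id, beat):
        """Events the render callback would schedule at a global beat."""
        loop = self.loops[loop_id]
        if not loop["playing"]:
            return []
        local = beat % loop["beats"]  # wrap into the repeating loop
        return [e for (onset, e) in loop["events"] if onset == local]
```

A collage tool or mod tracker would then define its loops once and drive them with play/stop, while the server's own 100Hz callback does the beat arithmetic and feeds csound.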
One catch with our approach is that, since I think it is best to continue using
a single instance of csound as a rendering engine, all
connecting applications have to agree on an audio sampling rate and sample
format. We are currently using 16000Hz, float samples, and we think it sounds
good and doesn't over-tax the floating-point capacity of the processor. To this
end, I think someone (Jean Piche?) should declare a sample-rate and
sample-format for the OLPC, which would facilitate sound interaction between the
activities (in terms of file sharing and simultaneous playing) without incurring
any real-time resampling (which we can't afford to do with the CPU).
PS. The current state of affairs is that csound works in floating-point. Does
our sound-card convert FP samples in hardware? Will we ever have an
integer/fixed-point csound implementation?
(lej on #sugar, #tam_tam)