<!DOCTYPE html PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN">
<html>
<head>
<meta content="text/html;charset=ISO-8859-2" http-equiv="Content-Type">
<title></title>
</head>
<body bgcolor="#ffffff" text="#000000">
Okay, I took a little holiday, then had to read through 100+ OLPC
emails, and then realized that a spam filter on my email had been
blocking Sugar emails since July, so it took some time to catch up, but
here I am... :)<br>
<br>
Martin Langhoff wrote:
<blockquote
cite="mid:46a038f90908200057s216b09a9i59b4d8913a02cb8a@mail.gmail.com"
type="cite">
<pre wrap="">2009/8/19 NoiseEHC <a class="moz-txt-link-rfc2396E"
href="mailto:NoiseEHC@freemail.hu"><NoiseEHC@freemail.hu></a>:
</pre>
<blockquote type="cite">
<blockquote type="cite">
<pre wrap=""> - Automatic assessment is snake oil, Bryan is well intentioned but
deeply wrong. See the earlier email at
<a class="moz-txt-link-freetext"
href="http://www.mail-archive.com/sugar-devel@lists.sugarlabs.org/msg05584.html">http://www.mail-archive.com/sugar-devel@lists.sugarlabs.org/msg05584.html</a>
</pre>
</blockquote>
<pre wrap="">Or you are wrong.
</pre>
</blockquote>
<pre wrap=""><!---->
I may well be wrong, but to explore that you will have to talk about
what I am stating :-)
</pre>
</blockquote>
So I was wrong too... :)<br>
<blockquote
cite="mid:46a038f90908200057s216b09a9i59b4d8913a02cb8a@mail.gmail.com"
type="cite">
<pre wrap="">We could rephrase it as
- Computer-based automatic assessment / grading is only passably
accurate for a tiny *tiny* subset of relevant human skills.
</pre>
</blockquote>
I am 98% sure that automatic grading (if you mean assessment == grading
here) is simply not possible at all. (It is a separate issue that, as
far as I know, grading in grades 1-4 is totally pointless and harms
children.) The main problem is that the tests which would in theory
differentiate children on a 1-5 grade scale are high-stakes tests,
which are inherently unreliable.<br>
<blockquote
cite="mid:46a038f90908200057s216b09a9i59b4d8913a02cb8a@mail.gmail.com"
type="cite">
<pre wrap=""> - However, it's very spectacular, and people are drawn to it... so
much that they are drawn to it even when it *clearly does not work*
for the skill being tested. It's so easy (for the teacher) and so
flashy, that people use it regardless of whether it works.
I say this after 9 years of work in the field -- I have seen
interactive SCORM objects, Moodle quizzes & lessons, HotPotatoes
activities, LAMS assessments, lots of other standalone assessment
tools. Have worked with teachers, watching their use.
What did I see? See the 2 points above.
There is a 3rd part... because these tools are cool, easy to use, they
do a lot of damage. In large part because they replace the "I don't
know how my students are doing" with "hey, I have all these scores and
numbers... never mind they are inaccurate and only cover about 3% of
what these kids should know".
So an inaccurate view of a tiny slice of the skillset -- but hey, we
have a number representing what this kid knows! Let's use it! The link
that follows is from the Asttle project, which I was briefly involved
in several years ago:
<a class="moz-txt-link-freetext"
href="http://www.tki.org.nz/r/governance/consider/steps/analyse_e.php">http://www.tki.org.nz/r/governance/consider/steps/analyse_e.php</a>
Even worse, the Asttle project promotes the idea that you can get a
form of 'dashboard' of what kids know. Looks like an airplane
dashboard, lots of dials, full of inaccurate data _about a tiny
subset_ of what matters.
</pre>
</blockquote>
I can feel your pain (especially after seeing the Asttle project), so
thanks for the warning!<br>
<blockquote
cite="mid:46a038f90908200057s216b09a9i59b4d8913a02cb8a@mail.gmail.com"
type="cite">
<blockquote type="cite">
<pre wrap="">What is more interesting is that there is some research in Hungary [1] about
the prerequisites of learning certain skills which are based on each other.
</pre>
</blockquote>
<pre wrap=""><!---->
Sure, that is interesting. Now how amenable to computer-grading are
those skills? What automated computer tests can assess them with a
decent accuracy?
</pre>
</blockquote>
It is not about computer-grading at all!!! It is about well-tested,
low-stakes tests (children must reach 80-90%) which can reliably measure
whether a child has, for example, reached the finishing or optimal level
in reading (I am talking only about basic skills). If a child does not
reach those levels, then it does not matter how constructivist your
education policy is in the higher grades (4-8), because these children
simply will not be able to gain any information from books. Currently
teachers can only deduce missing basic skills from the fact that the
children fail biology or geography tests (because they cannot read
reliably), and nobody tests reading skills continuously. Computerized
assessment cannot be worse than that, can it?<br>
<blockquote
cite="mid:46a038f90908200057s216b09a9i59b4d8913a02cb8a@mail.gmail.com"
type="cite">
<pre wrap=""> </pre>
<blockquote type="cite">
<pre wrap="">... Automatizing at
least some of those tests is probably the biggest thing since sliced bread
in education in my humble opinion</pre>
</blockquote>
<pre wrap="">
Are any of those tests "automatizable"? With what accuracy? If it
turns out some can be computer assessed... _how do we keep the
non-automatizable tests in the map_? Teachers forget them
*immediately*.
</pre>
</blockquote>
If the school decides to run this little research project, then I will
let you know about the "automatizability" of the relevant tests, I
promise! :)<br>
And I am sorry, but I will not be able to solve those other social
problems (non-automatizable tests) with computers, that is for sure...<br>
<br>
</body>
</html>