[Sugar-devel] Assessment in Karma

NoiseEHC NoiseEHC at freemail.hu
Sun Aug 30 15:01:52 EDT 2009


Okay, I was away on a little holiday, then had to read through 100+ OLPC 
emails, and then realized that a spam filter on my email account had been 
blocking Sugar emails since July, so it took some time to catch up, but 
here I am... :)

Martin Langhoff wrote:
> 2009/8/19 NoiseEHC <NoiseEHC at freemail.hu>:
>   
>>>  - Automatic assessment is snake oil, Bryan is well intentioned but
>>> deeply wrong. See the earlier email at
>>> http://www.mail-archive.com/sugar-devel@lists.sugarlabs.org/msg05584.html
>>>       
>> Or you are wrong.
>>     
>
> I may well be wrong, but to explore that you will have to talk about
> what I am stating :-)
>   
So I was wrong too... :)
> We could rephrase it as
>
>  - Computer-based automatic assessment / grading is only passably
> accurate for a tiny *tiny* subset of relevant human skills.
>   
I am 98% sure that automatic grading (if you mean assessment == grading 
here) is simply not possible at all. (And it is a separate issue that, as 
far as I know, grading in grades 1-4 is totally pointless and harms 
children.) The main problem is that the tests which would in theory 
differentiate children on a 1-5 grade scale are high-stakes tests, which 
are inherently unreliable.
>  - However, it's very spectacular, and people are drawn to it... so
> much that they are drawn to it even when it *clearly does not work*
> for the skill being tested. It's so easy (for the teacher) and so
> flashy that people use it regardless of whether it works.
>
> I say this after 9 years of work in the field -- I have seen
> interactive SCORM objects, Moodle quizzes & lessons, HotPotatoes
> activities, LAMS assessments, and lots of other standalone assessment
> tools. I have worked with teachers, watching how they use them.
>
> What did I see? See the 2 points above.
>
> There is a third part... because these tools are cool and easy to use,
> they do a lot of damage. In large part because they replace "I don't
> know how my students are doing" with "hey, I have all these scores and
> numbers... never mind that they are inaccurate and only cover about 3%
> of what these kids should know".
>
> So an inaccurate view of a tiny slice of the skillset -- but hey, we
> have a number representing what this kid knows! Let's use it! The link
> that follows is from the Asttle project, which I was briefly involved
> in several years ago:
>
>   http://www.tki.org.nz/r/governance/consider/steps/analyse_e.php
>
> Even worse, the Asttle project promotes the idea that you can get a
> form of 'dashboard' of what kids know. Looks like an airplane
> dashboard, lots of dials, full of inaccurate data _about a tiny
> subset_ of what matters.
>
>   
I can feel your pain (especially seeing the Asttle project), so thanks 
for the warning!
>> What is more interesting is that there is some research in Hungary [1] about
>> the prerequisites for learning certain skills which build on each other.
>>     
>
> Sure, that is interesting. Now how amenable to computer grading are
> those skills? What automated computer tests can assess them with
> decent accuracy?
>   
It is not about computer grading at all!!! It is about well-tested, 
low-stakes tests (children must reach 80-90%) which can reliably measure 
whether a child has, for example, reached the finishing or optimal level 
in reading (I am talking only about basic skills). If a child does not 
reach those levels, it does not matter how constructivist your education 
policy is in the higher grades (4-8), because these children simply will 
not be able to gain any information from books. Currently, teachers can 
only deduce missing basic skills from the fact that children fail biology 
or geography tests (because they cannot read reliably); nobody tests 
reading skills continuously. Computerized assessment cannot be worse than 
that, can it?
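
For what it is worth, the mechanical side of such a low-stakes mastery 
check is trivial; the hard part is the validated test content, not the 
code. Here is a minimal TypeScript sketch of only the scoring step, 
assuming a hypothetical Karma-style activity that counts correct items 
and compares the percentage against 80% / 90% cut points (the names 
ItemResult and classifyMastery, the level labels, and the exact cut 
scores are my own illustrative assumptions, not part of Karma or of the 
Hungarian research):

// Hypothetical sketch: criterion-referenced mastery check for one basic skill.
// Cut scores (80% / 90%) and level names are illustrative assumptions only;
// in practice they would come from the validated test, not from the code.
interface ItemResult {
  itemId: string;
  correct: boolean;
}

type MasteryLevel = "not yet reached" | "finishing level" | "optimal level";

function classifyMastery(results: ItemResult[]): MasteryLevel {
  if (results.length === 0) {
    throw new Error("no test items answered");
  }
  // Fraction of items answered correctly.
  const correctCount = results.filter((r) => r.correct).length;
  const score = correctCount / results.length;
  if (score >= 0.9) return "optimal level";    // assumed 90% cut point
  if (score >= 0.8) return "finishing level";  // assumed 80% cut point
  return "not yet reached";
}

// Example: a child answers 17 of 20 reading items correctly (85%).
const example: ItemResult[] = Array.from({ length: 20 }, (_, i) => ({
  itemId: "reading-" + (i + 1),
  correct: i < 17,
}));
console.log(classifyMastery(example)); // prints "finishing level"

The point of the sketch is only that reporting against a fixed cut score 
is easy; whether the underlying items actually measure the skill is 
exactly the part Martin is (rightly) skeptical about.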
>   
>> ... Automatizing at
>> least some of those tests is probably the biggest thing since sliced bread
>> in education, in my humble opinion
> Are any of those tests "automatizable"? With what accuracy? If it
> turns out some can be computer-assessed... _how do we keep the
> non-automatizable tests on the map_? Teachers forget them
> *immediately*.
>
>   
If the school decides to run this little research project, then I will 
let you know about the "automatizability" of the relevant tests, I promise! :)
And I am sorry, but I will not be able to solve those other social 
problems (the non-automatizable tests) with computers, that is for sure...
