<div dir="ltr">Dear Bernie,<div><br></div><div>I also wish you a Happy New Year ;)</div><div><br><div class="gmail_extra"><br><div class="gmail_quote">On Sat, Jan 2, 2016 at 4:57 AM, Bernie Innocenti <span dir="ltr"><<a href="mailto:bernie@codewiz.org" target="_blank">bernie@codewiz.org</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-color:rgb(204,204,204);border-left-style:solid;padding-left:1ex">+<a href="mailto:stefan.unterhauser@gmail.com">stefan.unterhauser@gmail.com</a><br>
<br>
Dogi, could you please respond to my questions below?<br>
<br>
I'm going to reclaim some disk space on Freedom before Jan 5, and if I<br>
don't hear back from you I will assume it's ok to remove any partitions<br>
on freedom-lvm that don't belong to Sugar Labs.<br>
<div class=""><div class="h5"><br></div></div></blockquote><div>Just want to remind you again that none of the four hard drives in "freedom" is owned by Sugar Labs. The original plan was to split the mix of stuff that was on "treehouse/housetree" as soon as possible: the Sugar Labs material onto "justice" (1) and the OLE and Treehouse material onto "freedom" (2), so that I could move my drives back to a newly repaired housetree (3).</div><div>We also agreed that "freedom" would be a hot-swap for "justice".<br></div><div><br></div><div><br class="">(1) happened only in part, and housetree starved with the rest when its last hard drive gave up ~18 months ago.<br></div><div><br></div><div>(2) was done in the first month, ~3 years ago -> and it did not give anybody other than me access to "freedom".</div><div><br></div><div>(3) never happened because (1) did not complete - instead, 3 months later somebody with physical access hacked his way into my system on "freedom" via a reboot, then modified it and allocated half of the free space.</div><div>When I complained I did not get the expected apology; instead I was fed a phrase along the lines of "backup space on housetree is too tiny and we need to make sure our systems are safe" ;) Anyway, I was reassured that it would only be the backup and that I could delete it once I did (3). Another 3 months later the whole systems crowd (yes, you guys) had access to "freedom" - nobody ever got my personal permission. Now all the stuff that was too experimental or heavy (like build machines) for "justice" gets done on "freedom" ... 
as long as they were well-behaved VMs (low or contained disk I/O) it was not a big problem for me, but it would be nice if people kept me in the loop ;)</div><div>Anyway, I am still totally against Docker on the main system, and I use Docker a lot (<a href="https://hub.docker.com/r/dogi/rpi-couchdb/">https://hub.docker.com/r/dogi/rpi-couchdb/</a>)</div><div>-> this year, on top of the usual yearly MIT power outage, we had some additional failures where Docker crashed the main system :(</div><div><br></div><div><br></div><div>Anyway, I am very sorry that the original plan of divorce (splitting up) tanked; we are entangled again - even if I wanted to, I can't just remove my drives anymore - and I think that is not completely my fault ;)</div><div><br></div><div> </div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-color:rgb(204,204,204);border-left-style:solid;padding-left:1ex"><div class=""><div class="h5">
On 12/10/2015 10:27 PM, Bernie Innocenti wrote:<br>
> Hello,<br>
><br>
> once again we're running low on disk space, so I'd like to see what we<br>
> can recover. Here is a list of all logical volumes:<br>
><br>
> LV VG Attr LSize<br>
> backup freedom-lvm -wi-ao--- 1000.00g<br>
> freedom-virtual freedom-lvm -wi-ao--- 500.00g<br>
> hammock-data freedom-lvm -wi-ao--- 150.00g<br>
> ole-data freedom-lvm -wi-a---- 100.00g<br>
> hanginggarden-data freedom-lvm -wi-ao--- 50.00g<br>
> docker_extra_storage freedom-lvm -wi-ao--- 15.00g<br>
> socialhelp freedom-lvm -wi-ao--- 20.00g<br>
> hammock ole -wi-ao--- 2.73t<br>
><br>
> The "ole" volume group lives on separate disks which presumably dogi<br>
> bought and installed on freedom a while ago. Dogi, could you confirm?<br>
><br>
> On the freedom-lvm VG, the largest partition is /backup, which is 70%<br>
> full. I'm afraid I've already done all I could to shrink it with the<br>
> wizbackup changes of a few months ago, and we need the extra buffer<br>
> space to account for growth.<br></div></div></blockquote><div><br></div><div>I gave you ~200 GB for it 12 months ago</div><div>and another ~300 GB 6 months ago</div><div><br></div><div> </div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-color:rgb(204,204,204);border-left-style:solid;padding-left:1ex"><div class=""><div class="h5">
><br>
> Next by size is freedom-virtual, which contains a plethora of VMs owned<br>
> by Dogi, plus a lot of old images which we're no longer running:<br>
><br>
> Id Name State<br>
> ----------------------------------------------------<br>
> 2 hammock running<br>
> 3 munin running<br>
> 4 hanginggarden running<br>
> 5 jerry running<br>
> 6 pirate running<br>
> 7 dogleash running<br>
> 8 rickshaw running<br>
> 9 chat running<br>
> 10 farmier running<br>
> 11 ole running<br>
> 12 kuckuck running<br>
> 13 dirt running<br>
> 14 owncloud running<br>
> 15 replicator running<br>
> 16 vote running<br>
> 17 beacon running<br>
> - buildslave-i386 shut off<br>
> - buildslave-x86_64 shut off<br>
> - docky shut off<br>
> - genome shut off<br>
> - honey shut off<br>
> - nicole shut off<br>
> - openbell shut off<br>
> - template-jessie shut off<br>
> - template-squeeze shut off<br>
> - template-wheezy shut off<br>
><br>
> Dogi, could you please list all the images you're still actively using?<br>
><br></div></div></blockquote><div>replicator, vote, beacon, buildslave-* and docky are not mine</div><div><br></div><div>I stopped genome, honey, nicole and openbell ~2 months ago but I still need to save the data from there</div><div><br></div><div><br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-color:rgb(204,204,204);border-left-style:solid;padding-left:1ex"><div class=""><div class="h5">
><br>
> The next largest LV is hammock-data. Since the hammock VM already has an<br>
> entire array of 2.7TB, perhaps it could move its data out of<br>
> hammock-data and free up the space?<br>
><br>
> Same goes for ole-data, mounted by openbell and hanginggarden: do these<br>
> belong to OLE, Dogi? Can we move them to the OLE VG?<br>
><br></div></div></blockquote><div><br></div><div>Busy, busy this week - </div><div>I am finishing up a deployment of 6 digital libraries for a Syrian refugee camp in Jordan:</div><div><a href="http://syriabell.ole.org">http://syriabell.ole.org</a> (runs on hammock)</div><div><a href="https://www.globalgiving.org/projects/tiger-girls/">https://www.globalgiving.org/projects/tiger-girls/</a></div><div><br></div><div>but I will see what I can do</div><div><br></div><div><br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-color:rgb(204,204,204);border-left-style:solid;padding-left:1ex"><div class=""><div class="h5">
><br>
> For completeness, here is the df:<br>
><br>
> Filesystem Size Used Avail Use% Mounted on<br>
> /dev/md0 9.1G 5.2G 3.4G 61% /<br>
> none 4.0K 0 4.0K 0% /sys/fs/cgroup<br>
> udev 24G 12K 24G 1% /dev<br>
> tmpfs 4.8G 12M 4.8G 1% /run<br>
> none 5.0M 0 5.0M 0% /run/lock<br>
> none 24G 0 24G 0% /run/shm<br>
> none 100M 0 100M 0% /run/user<br>
> freedom-lvm-backup 985G 647G 290G 70% /backup<br>
> freedom-lvm-docker_extra_storage 15G 12G 2.5G 82% /var/lib/docker<br>
> freedom-lvm-socialhelp 20G 12G 7.1G 62% /srv/socialhelp<br>
> freedom-lvm-freedom--virtual 493G 452G 16G 97%<br>
> /var/lib/libvirt/images<br>
><br>
<br>
><br></div></div></blockquote><div>xo</div><div>dogi</div><div><br></div><div>PS: Remember, I am just a phone call away: "617 I GO DOGI"</div><div> </div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-color:rgb(204,204,204);border-left-style:solid;padding-left:1ex"><div class=""><div class="h5">
--<br>
_ // Bernie Innocenti<br>
\X/ <a href="http://codewiz.org" rel="noreferrer" target="_blank">http://codewiz.org</a><br>
</div></div></blockquote></div><br></div></div></div>