[Systems] Disk space running out on freedom.sugarlabs.org
Stefan Unterhauser
stefan at unterhauser.name
Tue Jan 5 09:02:25 EST 2016
Dear Bernie,
I also wish you a Happy New Year ;)
On Sat, Jan 2, 2016 at 4:57 AM, Bernie Innocenti <bernie at codewiz.org> wrote:
> +stefan.unterhauser at gmail.com
>
> Dogi, could you please respond to my questions below?
>
> I'm going to reclaim some disk space on Freedom before Jan 5, and if I
> don't hear back from you I will assume it's ok to remove any partitions
> on freedom-lvm that don't belong to Sugar Labs.
>
Just wanna remind you again that none of the four hard drives in "freedom"
is owned by Sugar Labs. The original plan was to split up the mix of stuff
that was on "treehouse/housetree" as soon as possible: the Sugar Labs part
onto "justice" (1) and the OLE and Treehouse part onto "freedom" (2), so
that I could move my drives back to a new, fixed housetree (3).
We also agreed that "freedom" would be a hot-swap spare for "justice".
(1) happened only in part, and housetree was left stranded with the rest
when its last hard drive gave up ~18 months ago.
(2) was done within the first month, ~3 years ago -> I also did not give
anybody else access to "freedom".
(3) never happened because (1) was never completed - instead, 3 months
later, somebody with physical access rebooted and hacked their way into my
system on "freedom", then modified it and allocated half of the free
space.
When I complained I did not get the expected apology; instead I got fed a
phrase à la "the backup space on housetree is too tiny and we need to make
sure that our systems are safe" ;) Anyway, I was reassured that it would
only be the backup and that I could delete it once I did (3). Another 3
months later the whole systems crowd (yes, you guys) had access to
"freedom" - nobody ever got my personal permission. Now all the stuff that
was too experimental or heavy for "justice" (like build machines) is done
on "freedom" ... as long as they are well-behaved VMs (low or contained
disk I/O) it is not a big problem for me, but it would be nice if people
kept me in the loop ;)
Anyway, I am still totally against Docker on the main system, and I use
Docker a lot (https://hub.docker.com/r/dogi/rpi-couchdb/)
-> this year, on top of the usual yearly MIT power outage, we had some
additional failures where Docker crashed the main system :(
Anyway, I am very sorry that the original plan of divorce (splitting up)
tanked and we are entangled again - even if I wanted to, I can't just
remove my drives anymore - and I think it is not completely my fault ;)
> On 12/10/2015 10:27 PM, Bernie Innocenti wrote:
> > Hello,
> >
> > once again we're running low on disk space, so I'd like to see what we
> > can recover. Here is a list of all logical volumes:
> >
> >   LV                   VG          Attr      LSize
> >   backup               freedom-lvm -wi-ao--- 1000.00g
> >   freedom-virtual      freedom-lvm -wi-ao---  500.00g
> >   hammock-data         freedom-lvm -wi-ao---  150.00g
> >   ole-data             freedom-lvm -wi-a----  100.00g
> >   hanginggarden-data   freedom-lvm -wi-ao---   50.00g
> >   docker_extra_storage freedom-lvm -wi-ao---   15.00g
> >   socialhelp           freedom-lvm -wi-ao---   20.00g
> >   hammock              ole         -wi-ao---    2.73t
> >
> > The "ole" volume group lives on separate disks which presumably dogi
> > bought and installed on freedom a while ago. Dogi, could you confirm?
> >
> > On the freedom-lvm VG, the largest partition is /backup, which is 70%
> > full. I'm afraid I've already done all I could to shrink it with the
> > wizbackup changes of a few months ago, and we need the extra buffer
> > space to account for growth.
>
I gave you ~200G for it 12 months ago
and another ~300G 6 months ago.
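For the record, growing /backup onto space like that is presumably just a
plain lvextend plus filesystem resize - a rough sketch only, assuming the
LV name from the lvs output above and an ext4 filesystem, not checked on
the box:

  lvextend -L +300G /dev/freedom-lvm/backup   # size illustrative
  resize2fs /dev/freedom-lvm/backup           # grow the filesystem to match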
> >
> > Next by size is freedom-virtual, which contains a plethora of VMs owned
> > by Dogi, plus a lot of old images which we're no longer running:
> >
> >  Id    Name                           State
> > ----------------------------------------------------
> >  2     hammock                        running
> >  3     munin                          running
> >  4     hanginggarden                  running
> >  5     jerry                          running
> >  6     pirate                         running
> >  7     dogleash                       running
> >  8     rickshaw                       running
> >  9     chat                           running
> >  10    farmier                        running
> >  11    ole                            running
> >  12    kuckuck                        running
> >  13    dirt                           running
> >  14    owncloud                       running
> >  15    replicator                     running
> >  16    vote                           running
> >  17    beacon                         running
> >  -     buildslave-i386                shut off
> >  -     buildslave-x86_64              shut off
> >  -     docky                          shut off
> >  -     genome                         shut off
> >  -     honey                          shut off
> >  -     nicole                         shut off
> >  -     openbell                       shut off
> >  -     template-jessie                shut off
> >  -     template-squeeze               shut off
> >  -     template-wheezy                shut off
> >
> > Dogi, could you please list all the images you're still actively using?
> >
>
replicator, vote, beacon, buildslave-* and docky are not mine.
I stopped genome, honey, nicole and openbell ~2 months ago, but I still
need to save the data from them.
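If it helps, archiving those stopped VMs before their images get cleaned up
is basically just saving the domain XML plus the disk file - a minimal
sketch, with the image path and filename guessed from the libvirt defaults,
so adjust as needed:

  mkdir -p /backup/vm-archive
  virsh dumpxml genome > /backup/vm-archive/genome.xml
  cp /var/lib/libvirt/images/genome.img /backup/vm-archive/   # filename assumed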
>
> > The next largest LV is hammock-data. Since the hammock VM already has an
> > entire array of 2.7TB, perhaps it could move its data out of
> > hammock-data and free up the space?
> >
> > Same goes for ole-data, mounted by openbell and hanginggarden: do these
> > belong to OLE, Dogi? Can we move them to the OLE VG?
> >
>
Busy busy this week -
working on finishing up a deployment of 6 digital libraries for a
Syrian refugee camp in Jordan:
http://syriabell.ole.org (runs on hammock)
https://www.globalgiving.org/projects/tiger-girls/
but I will see what I can do.
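Moving ole-data (and hammock-data) over to the ole VG should mostly be a
matter of creating a same-sized LV there and copying across - a sketch
only, size taken from the lvs output above and untested:

  lvcreate -L 100G -n ole-data ole
  dd if=/dev/freedom-lvm/ole-data of=/dev/ole/ole-data bs=4M conv=fsync
  # verify the copy, repoint openbell/hanginggarden at the new device, then:
  lvremove freedom-lvm/ole-data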
>
> > For completeness, here is the df:
> >
> > Filesystem                        Size  Used Avail Use% Mounted on
> > /dev/md0                          9.1G  5.2G  3.4G  61% /
> > none                              4.0K     0  4.0K   0% /sys/fs/cgroup
> > udev                               24G   12K   24G   1% /dev
> > tmpfs                             4.8G   12M  4.8G   1% /run
> > none                              5.0M     0  5.0M   0% /run/lock
> > none                               24G     0   24G   0% /run/shm
> > none                              100M     0  100M   0% /run/user
> > freedom-lvm-backup                985G  647G  290G  70% /backup
> > freedom-lvm-docker_extra_storage   15G   12G  2.5G  82% /var/lib/docker
> > freedom-lvm-socialhelp             20G   12G  7.1G  62% /srv/socialhelp
> > freedom-lvm-freedom--virtual      493G  452G   16G  97% /var/lib/libvirt/images
> >
>
>
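And a side note on /var/lib/docker sitting at 82%: the exited containers
and dangling images can usually be trimmed without touching anything that
is running - roughly, and untested here:

  docker rm $(docker ps -a -q -f status=exited)
  docker rmi $(docker images -q -f dangling=true)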
xo
dogi
PS: Just remember I am just a phone call away "617 I GO DOGI"
> --
> _ // Bernie Innocenti
> \X/ http://codewiz.org
>