Archive for the ‘security’ Category

what does ‘cloud’ stand for?

simply speaking, a cloud consists of similar computers (homogeneous hardware). usually every single cloud computer runs the same OS (host system), each controlling various guests. the main technical motivations are:

  • load balancing of cpu load (move the VM to a machine with more CPU power)
  • load balancing of input/output load (RAM increase; faster storage raid; in-memory databases)
  • load balancing of bandwidth usage (move the VM closer to the most demanding users)
  • increase redundancy (reduce hardware failures; reduce power loss issues)
the main non-technical motivations seem to be:
  • marketing – ‘cloud’ sounds cool, although a ‘cloud’ is basically just a subset of the internet
  • vendor lock-in – probably no surprise to anyone
  • centralization – cheaper to manage; grants more control over the platform

if you are a customer of a service which is hosted in the cloud you usually don’t see the cloud at all, hence the term ‘cloud’:

  • amazon: when buying books or other things
  • google: using google search; reading email; google maps
  • microsoft azure: whatever that platform is good for – is anybody actually using it?
if you are using one or more machines from a cloud, there are basically two interesting patterns:
  • actively maintain each computer: implement distributed file systems and distributed services
  • use an abstraction: someone else has implemented the handling of nodes and services, and what you develop runs on top of that abstraction
to sum up, if you want to use cloud computers, you have to decide between:
  • SaaS – Software as a Service (something like google mail)
  • PaaS – Platform as a Service (something like ms azure / amazon ec2)
as the trend in hardware design is going towards multicore along with NUMA, it seems the cloud is undergoing similar changes. as a rule of thumb i’d say that ‘cloud computing can be seen as an approach to build a distributed operating system’.

cloud problems

not too long ago you would have maintained your own infrastructure, with access to the hardware and software used. but times have changed, and the americanization of things – that is, the building of ‘super services’ – is about to change the internet yet again.

i see these issues (in no special order):

loss of control

this is probably the strongest argument against using third party proprietary services, as you can’t fix them when they are broken. but cloud computing usually means a loss of privacy as well. the article [2], mentioning various points from richard stallman and larry ellison, probably makes this clear. it is interesting that the SaaS wikipedia article [3] reads like a campaign for SaaS – probably written by someone with a marketing background. there is also the danger of losing your data to foreign countries, as mentioned in [7].

loss of own infrastructure

you don’t have your own infrastructure anymore, thus you don’t have physical control over your devices. additionally you then depend on working internet connections. it is likely that the infrastructure you rely on runs in one or several different countries.

loss of software not designed for the cloud

the various versions of the GPL had a great influence on how software could be used and distributed, but with the advent of the cloud this changes drastically. the way programs, especially webservices, are designed makes the GPL concept useless: running a service is not distribution, so the license does not affect the operator at all. however, there is a new license, the ‘Affero General Public License’ [4], which fills that gap.

why is wordpress not licensed under the AGPL, i wonder? my first guess is laziness, as every author of every single patch would have to be asked for permission to change the license. but the wordpress hosters could also be using the GPL to greenwash their software, as the GPL does not force them to hand out proprietary server-side extensions. but who knows?!

loss of knowledge how to set up services similar to today’s cloud services

think about email – who operates his own mailserver nowadays? most friends of mine use google mail, and this implies: once you are familiar with a service and its workflow you usually do not want to change. especially if the service seems to be free, as google mail does (but most of my friends seem not to care that google simply replaced the currency – privacy is used as payment instead).

as a consequence the knowledge about how to run your own mail server gets lost. if you understand german, listen to alternativlos 18 – ‘Peak Oil, den Weltuntergang, und wie man sich vorbereiten kann’ [5] minute 74 ff – they discuss this issue.

my personal experience

i have a strong tendency to use devices which are capable of bringing me certain services offline. this is why i put a lot of effort into the evopedia application for instance. the nokia n900 is probably another good example where i try to maintain an offline infrastructure – i didn’t even have mobile internet on the n900 for a complete year and yet i was able to do most things using sip/mappero/evopedia and others.

here are some thoughts about online services i use:


wordpress.com

i use wordpress.com right now and i really hate it for these points:

  • you can’t write offline
  • uploading images initially, or updating them later, is a frustrating process
  • i sometimes lose parts of articles while writing
  • there is no good backup process for offline backups
  • i hate the WYSIWYG editor as it does not work very well
  • wordpress is inconsistent in producing a good web 2.0 workflow: it feels like reloading the whole page all the time instead of updating single dom-tree elements only, as it would be done with web 2.0; if you don’t trust me, have a look at [6] – how the upcoming wikipedia editor works
of course i could host wordpress on my own webserver, and i have wanted to do that for a long time. the problem is that wordpress is optimized to be run on wordpress.com, thus i think it might be too much work for me to keep it supplied with proper security updates and plugin management. instead i am searching for a blog system which uses markdown in combination with git, but i haven’t found yet what i am searching for.
don’t get me wrong, i really like wordpress but i don’t like this dependency and lack of flexibility using their software.
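just to illustrate how small a markdown-plus-git blog could be, here is a sketch. the posts/ layout, the render_posts helper name and the plain <pre> wrapper are my assumptions, not an existing tool; a real setup would pipe through a proper markdown converter such as pandoc.

```shell
# sketch: render every markdown post in a git-tracked posts/ directory
# to a matching html/<name>.html file. the body is embedded verbatim in
# a <pre> block to keep the sketch dependency-free.
render_posts() {
  mkdir -p html
  for post in posts/*.md; do
    [ -e "$post" ] || continue            # no posts yet, nothing to do
    name=$(basename "$post" .md)
    {
      printf '<!DOCTYPE html><html><body><pre>\n'
      cat "$post"
      printf '</pre></body></html>\n'
    } > "html/$name.html"
  done
}
```

writing offline then becomes a text editor plus ‘git commit’, and publishing is ‘git push’ followed by running render_posts on the server.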

google mail/docs

i really love ‘google docs’ as it is a wonderful collaborative platform, but i can’t use it as i would have to disclose to google all documents i’d be working on.

google android

like google mail and google docs, android has a very good cloud integration. but if you want to use services other than google’s, it is a horrible platform. for instance i keep installing xabber [12]: although google uses jabber itself, it intentionally requires you to install third party software in order to use a non-google jabber server. the same goes for most other services. if i had to use an android phone i would buy one with proper CyanogenMod [13] support.


github.com

a great service for source code hosting using git. still, the platform itself is not available as free software, unlike http://gitorious.org/ or http://gitlabhq.com/. github.com uses a wiki which is bound to the platform and not contained in the git repo.

note: although i never used http://www.fossil-scm.org i like the idea that it contains a wiki in the repository as well

i use github.com only for free and open source projects.

better without clouds

the conventional use of the term ‘cloud’ simply indicates a buzzword or business term for vendor lock-in and centralized infrastructure you don’t have control of. that is good to know as it helps to recognize and avoid such services. what one should use instead is decentralized infrastructure located near the user, connected to the internet where needed, giving the user the control over the platform.

arguably this concept is implemented as a new trend called ‘personal cloud‘ or ‘private cloud server‘. but these terms are limiting the trend to personal or private matters, yet i would like to see it in businesses as well.


following the concept of decentralization, users can host their own files and other things such as address books / calendars on their own home devices.

a list of interesting devices to give you an idea:

  • sheevaplug [9] – there is even a nixos version for this device (by viric)!
  • pogoplug [10]
  • tonidoplug [11]
  • fritz!box (with myfritz and fritznas) [14]


software implementing services

a list of software i find interesting:

  • diaspora [15] – decentralized facebook
  • owncloud [16] – dropbox like service
  • sparkleshare [22] – a collaboration and sharing tool that is designed to keep things simple and to stay out of your way
  • tomahawk [19] – a nice music streaming service
  • various p2p / torrent like services:
    • mldonkey [17]

still, most ‘personal or private clouds’ scale differently compared to the big three mentioned in the beginning of this article. for instance, most of these services are configured in the classical client/server way, and they usually do not implement concepts such as failover, backups or load balancing. for that to happen, a new set of tools and decentralized frameworks based on p2p technologies is required – which has just not happened yet.

there is also a political issue: most internet users do not have a decent upload channel, which basically means that their internet connection is not suited to hosting services themselves.

software for managing services

  • openshift [20] – a cloud computing platform-as-a-service product from red hat
  • openstack [21] – a global collaboration of developers and cloud computing technologists producing ‘the ubiquitous open source cloud computing platform for public and private clouds’
  • disnix [8] – a distributed deployment extension for Nix, a purely functional package manager
i’ve used none of these, but i like to point out that there is ongoing open source involvement – and interestingly, none of these technologies are used in private clouds. private clouds seem to implement the classical client/server paradigm at the moment. there is a remarkable exception, namely filesharing using p2p/kademlia, which implements a basically read-only storage that scales pretty well already.

a matter of design

to make the private cloud or a decentralized cloud a success we need:

  • a standardized package manager with proper software life-cycle management
  • symmetrical internet connections with decent upload/download speeds
  • transparent support for scalability/reliability/redundancy (the points mentioned in the beginning of the article)
  • powerful hardware with low power usage but capable of high loads
  • encryption and certificates or a chain of trust
  • ipv6 – we need good endpoint communication capabilities
  • a clear understanding of where we want to put our personal data and how we can protect it
i think each requirement on its own is already implemented somewhere, but not in combination with the others. there is not yet a library providing the software/protocol requirements, and the hardware is either not powerful enough or not intended to be used in the required way.


still it is a long way for the private clouds to reach the same level of features/quality the big clouds already have. for the time being it seems to be complicated for the average internet user to use the internet without losing too much of his individuality, and thus freedom of expression.


[1] http://www.google.de/search?sourceid=chrome&ie=UTF-8&q=richard+stallman+cloud

[2] http://www.guardian.co.uk/technology/2008/sep/29/cloud.computing.richard.stallman

[3] http://de.wikipedia.org/wiki/Software_as_a_Service

[4] http://en.wikipedia.org/wiki/Affero_General_Public_License

[5] http://alternativlos.org/18/

[6] https://www.mediawiki.org/wiki/VisualEditor:InezSandbox

[7] http://www.engadget.com/2011/06/30/microsoft-european-cloud-data-may-not-be-immune-to-the-patriot/

[8] http://nixos.org/disnix/

[9] http://de.wikipedia.org/wiki/SheevaPlug

[10] http://pogoplug.com/

[11] http://en.wikipedia.org/wiki/Tonido

[12] http://www.xabber.com/

[13] http://www.cyanogenmod.com/

[14] https://www.myfritz.net/was_ist_myfritz.xhtml

[15] http://de.wikipedia.org/wiki/Diaspora_(Software)

[16] http://de.wikipedia.org/wiki/Owncloud

[17] http://de.wikipedia.org/wiki/Mldonkey

[18] http://trac.edgewall.org/

[19] http://www.tomahawk-player.org/

[20] http://en.wikipedia.org/wiki/OpenShift

[21] http://openstack.org/

[22] http://sparkleshare.org/



i just finished listening to “Episode 176: Quantum Computing” [1] and this is really a great podcast. like the whole SE-Radio btw!

this podcast really inspired me and on the way back from work, i was thinking about the possibility to exploit software using quantum computing.

quantum cracking, that is. it would work like this: assume you have a program or function which takes input. the ultimate goal is to find some input which will crash the program. using a quantum computer, searching the input space for such a crashing input is probably much more feasible.
i could imagine that quantum computing could also be used for software verification, which is actually quite the opposite of quantum cracking.

so when quantum computers arrive we do not only lose RSA (and see AES weakened), but our computers will be open to everyone with such a system. hopefully such systems spread quickly, which might compensate the negative effect, maybe with quantum cryptography.

but as martin laforest says: at the end of the day i still don’t know when this technique will arrive. but when it arrives it will turn security upside down.

the most promising aspect of quantum computing, which is mentioned in the podcast, is that it will enable detailed quantum research which i consider a very cool thing as it will help to understand what goes down there.




how to integrate a daemon into nixos

in this short article i want to show how services can be added/used in nixos. i personally find the nixos way quite an easy and ‘clean’ approach!

as i integrated cntlm my goals were:

  • easy to use
  • password must be stored in a safe place
  • service should not be run as root but as a special user: cntlm
it helped a lot to look at similar scripts like the sshd integration, but i also used “NixOS: A Purely Functional Linux Distribution” [1], which describes most of the aspects needed to get a service up and running.
from a developer’s point of view, one has to write two nix expressions:
  • cntlm/default.nix
    the script which describes the software cntlm
  • nixos/modules/services/networking/cntlm.nix
    the script which describes the service
from a user’s point of view, one has to modify only one nix expression:
  • /etc/nixos/configuration.nix
    the place where all the configuration happens


cntlm/default.nix

this nix expression describes the cntlm software and is quite simple, as it basically fetches the software, compiles it and installs it into the system. note that a user does not have to install this software manually using:

  • nix-env -i cntlm
still this would be possible, and a normal/root user could use the software in his profile this way.
anyway, here is the script:
     1  { stdenv, fetchurl, which}:
     3  stdenv.mkDerivation {
     4    name = "cntlm-0.35.1";
     6    src = fetchurl {
     7      url = mirror://sourceforge/cntlm/cntlm-0.35.1.tar.gz;
     8      sha256 = "7b3fb7184e72cc3f1743bb8e503a5305e96458bc630a7e1ebfc9f3c07ffa6c5e";
     9    };
    11    buildInputs = [ which ];
    13    installPhase = ''
    14      ensureDir $out/bin; cp cntlm $out/bin/;
    15      ensureDir $out/share/; cp COPYRIGHT README VERSION doc/cntlm.conf $out/share/;
    16      ensureDir $out/man/; cp doc/cntlm.1 $out/man/;
    17    '';
    19    meta = {
    20      description = "Cntlm is an NTLM/NTLMv2 authenticating HTTP proxy";
    21      homepage = http://cntlm.sourceforge.net/;
    22      license = stdenv.lib.licenses.gpl2;
    23      maintainers = [ stdenv.lib.maintainers.qknight ];
    24    };
    25  }

the only point of interest might be buildInputs (line 11), which includes which: in this build script ‘which gcc’ is used to test whether gcc is installed.
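such a probe is easy to reproduce in shell; here is a sketch using the POSIX ‘command -v’ equivalent of ‘which gcc’ (the helper name have_tool is made up for the sketch):

```shell
# sketch: the kind of availability probe the cntlm build script performs
# with 'which gcc' -- test whether a tool is on PATH before relying on it.
have_tool() {
  command -v "$1" > /dev/null 2>&1
}

if have_tool gcc; then
  echo "gcc found, build can proceed"
else
  echo "gcc missing, aborting build"
fi
```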


nixos/modules/services/networking/cntlm.nix

this expression is used to integrate cntlm as a system service.

     1  { config, pkgs, ... }:
     3  with pkgs.lib;
     5  let
     7    cfg = config.services.cntlm;
     8    uid = config.ids.uids.cntlm;
    10  in
    12  {
    14    options = {
    16      services.cntlm= {
    18        enable = mkOption {
    19          default = false;
    20          description = ''
     21            Whether to enable cntlm, which starts a local proxy.
    22          '';
    23        };
    25        username = mkOption {
    26          description = ''
    27            Proxy account name, without the possibility to include domain name ('at' sign is interpreted literally).
    28          '';
    29        };
    31        domain = mkOption {
    32          description = ''Proxy account domain/workgroup name.'';
    33        };
    35        password = mkOption {
    36          default = "/etc/cntlm.password";
    37          type = with pkgs.lib.types; string;
    38          description = ''Proxy account password. Note: use chmod 0600 on /etc/cntlm.password for security.'';
    39        };
    41        netbios_hostname = mkOption {
    42          default = config.networking.hostName;
    43          description = ''
    44            The hostname of your workstation.
    45          '';
    46        };
    48        proxy = mkOption {
    49          description = ''
    50            A list of NTLM/NTLMv2 authenticating HTTP proxies.
    52            Parent proxy, which requires authentication. The same as proxy on the command-line, can be used more than  once  to  specify  unlimited
    53            number  of  proxies.  Should  one proxy fail, cntlm automatically moves on to the next one. The connect request fails only if the whole
    54            list of proxies is scanned and (for each request) and found to be invalid. Command-line takes precedence over the configuration file.
    55          '';
    56        };
    58        port = mkOption {
    59          default = [3128];
    60          description = "Specifies on which ports the cntlm daemon listens.";
    61        };
    63       extraConfig = mkOption {
    64          default = "";
    65          description = "Verbatim contents of cntlm.conf.";
    66       };
    68      };
    70    };
    73    ###### implementation
    75    config = mkIf config.services.cntlm.enable {
    76      users.extraUsers = singleton {
    77          name = "cntlm";
    78          description = "cntlm system-wide daemon";
    79          home = "/var/empty";
    80      };
    82      jobs.cntlm = {
    83          description = "cntlm is an NTLM / NTLM Session Response / NTLMv2 authenticating HTTP proxy.";
    84          startOn = "started network-interfaces";
    85          environment = {
    86          };
    88      preStart = '' '';
    90      daemonType = "fork";
    92      exec =
    93        ''
    94          ${pkgs.cntlm}/bin/cntlm -U cntlm \
    95          -c ${pkgs.writeText "cntlm_config" cfg.extraConfig}
    96        '';
    97      };
    99      services.cntlm.extraConfig =
   100        ''
   101          # Cntlm Authentication Proxy Configuration
   102          Username        ${cfg.username}
   103          Domain          ${cfg.domain}
   104          Password        ${cfg.password}
   105          Workstation     ${cfg.netbios_hostname}
   106          ${concatMapStrings (entry: "Proxy ${entry}\n") cfg.proxy}
   108          ${concatMapStrings (port: ''
   109            Listen ${toString port}
   110          '') cfg.port}
   111        '';
   112    };
   113  }
notable parts are:
  • (line 1-13) if interested in the general nix language: read [1] pages 6 and 7 (page 5 might also be interesting)
  • (line 18-66) where a list of options is declared (some with default arguments; some without, which forces the user to set them)
  • (line 76-79) where the service is started as a different user (security measure)
  • (line 92-97) where the configuration is generated on the fly (yes, cntlm.conf is regenerated every time the configuration in /etc/nixos/configuration.nix is changed; a user using the cntlm service on nixos does not edit cntlm.conf manually)
  • (line 99-111) where a minimal configuration (extract of the example cntlm.conf) is parameterized.
  • (line 106) where a list of items is transformed into a multi line structure where:
    cfg.proxy = [ "foo" "bar" "baz" ];
    is transformed into:
    Proxy foo
    Proxy bar
    Proxy baz
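the transformation above, mimicked in shell for illustration (nix’s concatMapStrings needs no explicit loop, but the effect is the same):

```shell
# sketch: what concatMapStrings (entry: "Proxy ${entry}\n") does to the
# example list, rebuilt with a shell loop over the same items.
proxies="foo bar baz"
config=""
for entry in $proxies; do
  config="${config}Proxy ${entry}
"
done
printf '%s' "$config"
# prints:
# Proxy foo
# Proxy bar
# Proxy baz
```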

how to make use of the above expressions

a user has to append this configuration to /etc/nixos/configuration.nix, and cntlm will be installed, configured and started:

  services.cntlm = {
    enable = true;
    username = "myusername";              # example values only, adjust
    domain = "mydomain";                  # to your own proxy account
    proxy = [ "proxy.example.com:3128" ];
  };


in contrast to most other distributions, nixos subjects not only packaging to ‘clean’ package management, but also configuration management (the /etc stuff) and runtime management. this is a very clean design which helps to avoid lots of pitfalls.


[1] http://www.st.ewi.tudelft.nl/~dolstra/pubs/nixos-jfp-final.pdf


finally … the online publication of my diploma thesis (DT) is here; it can be found online at [1], including the source at [2].

i hope that the terminology introduced in this DT (chap 9) will be used. this is of course also true for concepts engineered in (chap 7).

candies can be found here:
 - chap 4: components used in a package manager
 - chap 4.6: different integration levels of a package manager
 - chap 5.13: ways to replicate a system
 - chap 5.15 ff
 - chap 6.2: evopedia deployment summary
 - chap 7 (you might want to read this a few times, it is quite complex)
 - chap 9: here i introduce some new terminology in package management
           (probably a _must read_)

see also the README in [2] for further information.


[1] https://github.com/qknight/Multi-PlatformSoftwarePackageManagement/blob/master/Multi-PlatformSoftwarePackageManagement.pdf

[2] https://github.com/qknight/Multi-PlatformSoftwarePackageManagement



‘binary deployment’ seems to be a good and fast solution nowadays (i’m talking about open source here). but what proof do i have that the source code was not modified before being compiled and signed (say, by downstream::debian)?

Note: you can replace debian by any other distribution doing ‘binary deployment’ (it is just an example).

how is binary deployment actually done

this is very much distribution dependent. in general this workflow is used:

  1. download upstream source
  2. arrange a build environment
  3. apply ‘downstream’ patches
  4. install into DESTDIR/PREFIX and create an image from that
  5. finally distribute that image

(1) can be secured by signatures, using cryptographic hashes and a sig file. (2) is complicated, as a pure build environment CAN NOT be guaranteed by most distributions – a notable exception is nix, where the build chain and all packages are pure (pure meaning that no mutual effects between two or more installed components happen). (3) as downstream patches are usually very small, they could be checked manually for security related issues.
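securing step (1) can be sketched concretely: a hedged example of checking a downloaded tarball against a published sha256 hash. the file name and contents are faked here so the sketch is self-contained; they do not come from any real distribution.

```shell
# sketch: compare the downloaded source tarball against the hash
# published by upstream before building anything from it.
set -e
printf 'pretend upstream source\n' > upstream-1.0.tar.gz

# the hash upstream would publish next to the tarball
expected=$(sha256sum upstream-1.0.tar.gz | cut -d' ' -f1)

verify() {
  actual=$(sha256sum upstream-1.0.tar.gz | cut -d' ' -f1)
  [ "$expected" = "$actual" ]
}

verify && echo "hash ok, safe to build"

printf 'evil patch\n' >> upstream-1.0.tar.gz   # simulate tampering
verify || echo "hash mismatch, refusing to build"
```

a sig file works the same way, with a gpg signature check replacing the plain hash comparison.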

security problems using binary deployment

downstream could simply add another ‘evil’ patch in step (3), and when the package gets created, the source patch could be removed to hide the modification. this has happened already, see [2]. if the user wants to prevent such situations there is a limited set of options. he could:

  • choose to only do ‘source deployment’ (like in gentoo)
  • set up his own build environment (debian) which would transform the ‘binary deployment’ into ‘source deployment’
  • use tools like SELinux and AppArmor (but these tools work best on programs you can’t check, such as skype, or on open source tools where you assume ‘poor programming practice’ with regard to security)

.. another option

i’ve been playing with nix lately, and as nix is a ‘purely functional package manager’, step (2) effects are minimized since components don’t interfere. as a result this means: if you clone the original build chain, you can expect the same outcome from the same input. so i experimented with two components:

  • vim
  • apache-httpd

the results are very promising as:

  • both projects have a 1:1 file mapping after reinstallation (that means reinstalling would result in the same files being created for each project)
  • only the binaries had differences, that is: both tools contain a timestamp which is of course different
  • DSO (dynamic shared objects) as modules/mod_cgi.so were not timestamped contrary to my expectation

Edit: it turns out that there was some research on this topic already, see [3] page 30. i quote it and highlight some passages:

To ascertain how well these measures work in preventing impurities in NixOS, we performed two builds of the Nixpkgs collection on two different NixOS machines. This consisted of building 485 non-fetchurl derivations. The output consisted of 165927 files and directories. Of these, there was only one file name that differed between the two builds, namely in mono-1.1.4: a directory gac/IBM.Data.DB2/1.0.3008.37160 7c307b91aa13d208 versus 1.0.3008.40191 7c307b91aa13d208. The differing number is likely derived from the system time. We then compared the contents of each file. There were differences in 5059 files, or 3.4% of all regular files. We inspected the nature of the differences: almost all were caused by timestamps being encoded in files, such as in Unix object file archives or compiled Python code. 1048 compiled Emacs Lisp files differed because the hostname of the build machines were stored in the output. Filtering out these and other file types that are known to contain timestamps, we were left with 644 files, or 0.4%. However, most of these differences (mostly in executables and libraries) are likely to be due to timestamps as well (such as a build process inserting the build time in a C string). This hypothesis is strongly supported by the fact that of those, only 42 (or 0.03%) had different file sizes. None of these content differences have ever caused an observable difference in behaviour.

how did i do the checks

i used a prefix installation of nix on gentoo. i set the store path to something like ‘~/mynix/store’ so that every program needs to be recompiled (nix limitation/feature). afterwards i did:

nix-env -i apache-httpd

ls store | grep apache-httpd

cp -R store/gyp2arhqcglbq6iq1hndclljs7v9n30k-apache-httpd-2.2.17/ apache1

nix-env -e apache-httpd

nix-env --delete-generations old

nix-store --delete store/gyp2arhqcglbq6iq1hndclljs7v9n30k-apache-httpd-2.2.17/

then i did the same again, but copied to apache2/ instead, and started comparing.
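the comparing step boils down to a recursive diff over the two copies. in this sketch, apache1/apache2 are dummy trees built on the spot so it runs without nix; the embedded-timestamp situation is simulated:

```shell
# sketch: recursively diff the two copied store paths, reporting only
# which files differ. the dummy trees mimic the observed result: modules
# identical, the main binary differing only in an embedded timestamp.
set -e
mkdir -p apache1/bin apache2/bin
printf 'identical module\n' > apache1/bin/mod_cgi.so
printf 'identical module\n' > apache2/bin/mod_cgi.so
printf 'built 10:00\n' > apache1/bin/httpd   # same build, different
printf 'built 11:30\n' > apache2/bin/httpd   # embedded timestamp

# -r: recurse, -q: report names only; don't abort on differences
diff -rq apache1 apache2 || true
```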

possible solution to the timestamp problem

as it seems that the timestamps are the only problem, here are some thoughts on how to overcome this:

  • write a compare utility which ignores timestamps (of course one has to find such regions first)
  • always freeze the clock when compiling, setting it to a fixed time: this could be done by using LD_PRELOAD to map an indirection layer over the syscalls used for time/date things. remapping library calls is nothing new (‘trickle is a portable lightweight userspace bandwidth shaper’ [1] uses it).
    NOTE: this might have unknown side effects and needs to be evaluated, as a fixed time will interfere with:

    1. a build environment measuring build-time using the time command
    2. resetting the clock might result in ‘clock skew detected’ messages and stop the build, therefore all files need to be ‘touched’ in order to make that work
  • adding a PACKAGE_MANAGER_BUILD_TIME variable to the build environment. this implies one would either have to alter the buildchain (gcc timestamps) or patch upstream’s source, depending on where that timestamp is applied. but the effect would be that the same timestamp is used, resulting in a 1:1 match
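the first bullet above (a compare utility that tolerates timestamps) can be approximated cheaply by comparing file sizes first, which the figures quoted from [3] support: only 0.03% of files differed in size. a sketch over dummy build trees:

```shell
# sketch: a cheap first compare pass that flags only files whose sizes
# differ between two build outputs -- an embedded timestamp of fixed
# width keeps the size identical. build1/build2 are dummies.
set -e
mkdir -p build1 build2
printf 'stamp 10:00:00\n' > build1/prog    # same size, stamp differs
printf 'stamp 11:30:59\n' > build2/prog
printf 'short\n' > build1/data
printf 'longer contents\n' > build2/data   # a real content difference

for f in build1/*; do
  name=$(basename "$f")
  s1=$(wc -c < "build1/$name")
  s2=$(wc -c < "build2/$name")
  [ "$s1" -eq "$s2" ] || echo "size differs: $name"
done
# prints:
# size differs: data
```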


  • i would really love to experiment further on this topic but i don’t have the time right now to do so. i hope that someone else might take over.
  • i could also imagine a ‘chain of trust’ using gpg signatures. this way we could have several automated build systems monitoring the sanity of the builds.
  • i also think that the ‘possible solutions’ above could be of use for distributions like debian (i think debian has some kind of build purity effort but i can’t find the docs right now) and alike.


[1] http://monkey.org/~marius/pages/?page=trickle

[2] https://www.redhat.com/archives/fedora-announce-list/2008-August/msg00012.html

[3] http://www.st.ewi.tudelft.nl/~dolstra/pubs/nixos-jfp-final.pdf


gentoo security

gentoo linux logo (copied from commons.wikimedia.org)


security & integrity

just a quick reminder on how to keep your gentoo system safe and sound:

  1. eix-sync; emerge -uDN world
    this will ensure that all active components (see /var/lib/portage/world) are updated;
    www-plugins/adobe-flash and app-text/acroread will not be updated if not listed there!
  2. check /var/lib/portage/world for software which you don’t need anymore
  3. afterwards: emerge --depclean
    packages not listed in /var/lib/portage/world will be removed (check carefully)
  4. use glsa-check -f affected (requires a recent eix-sync, done in step (1))
  5. update the gentoo-kernel as often as possible (reboot afterwards)
    recent kernels are a good thing: sys-kernel/gentoo-sources-2.6.32-r24
    NOTICE: i’m referring to the -r24 suffix and not the kernel version in general
  6. reboot your system on security related software/kernel updates
    (see glibc issue lately, which required a reboot)
  7. only start services you need, disable services which don’t get used anymore

this list is, of course, incomplete. but as i did this wrong until now, there might be other gentoo users out there, who still do. anyone?

saving disk space

check and clean these places:

  1. /tmp
  2. /var/tmp
  3. eclean -d distfiles
    which will clean out obsolete files from /usr/portage/distfiles
  4. /usr/src/
    (often there are old kernel versions; here it is 6.2 gb for 3 kernels)
  5. check /var/db/pkg/<CATEGORY>/<PKG-VERSION>/
    this often contains packages i’m not using anymore
  6. use emerge --depclean
  7. check /lib/modules for kernel module size
    i’ve been using genkernel and i had built all modules, resulting in 1.2 gb per kernel
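the places listed above can be inspected in one pass; a small du sketch (run as root to see everything, paths that don’t exist are simply skipped):

```shell
# sketch: summarize the disk usage of the places listed above.
# -s gives one total per argument, -h is human readable; directories
# missing on non-gentoo systems are skipped.
for d in /tmp /var/tmp /usr/portage/distfiles /usr/src /lib/modules; do
  if [ -d "$d" ]; then
    du -sh "$d" 2>/dev/null || true   # permission errors are ignored
  fi
done
```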

updated: 2011-01-04 added eclean to ‘saving disk space’, thanks to Leifbk

Read Full Post »