
Archive for the ‘technology’ Category

what does ‘cloud’ stand for?

simply speaking, a cloud consists of similar computers (homogeneous hardware). usually every single cloud computer runs the same OS (host system), each controlling various guests. the main technical motivations are:

  • load balancing of cpu load (move the VM to a machine with more CPU power)
  • load balancing of input/output load (RAM increase; faster storage raid; in-memory databases)
  • load balancing of bandwidth usage (move the VM closer to the most demanding users)
  • increased redundancy (reduce hardware failures; reduce power loss issues)
the main non-technical motivations seem to be:
  • marketing – ‘cloud’ sounds cool, although a ‘cloud’ is basically just a subset of the internet
  • vendor lock-in – probably no surprise to anyone
  • centralization – cheaper to manage; grants more control over the platform

if you are a customer of a service which is hosted in the cloud you usually don’t see the cloud at all, hence the term ‘cloud’:

  • amazon: when buying books or other things
  • google: using google search; reading email; google maps
  • microsoft azure: whatever that platform is good for, is anybody actually using this?
if you are using one or more machines from a cloud, there are basically two interesting patterns:
  • actively maintain each computer: implement distributed file systems and distributed services yourself
  • use an abstraction: someone else has implemented the handling of nodes and services, and what you develop runs on top of that abstraction
to sum up, if you want to use cloud computers, you have to decide between:
  • SaaS – Software as a Service (something like google mail)
  • PaaS – Platform as a Service (something like ms azure / amazon ec2)
as the trend in hardware design is going towards multicore along with NUMA, it seems the cloud is undergoing similar changes. as a rule of thumb i’d say that ‘cloud computing can be seen as an approach to build a distributed operating system’.

cloud problems

not too long ago you would have maintained your own infrastructure with access to the hardware and software used. but times have changed, and the americanization of things, that is the building of ‘super services’, is about to change the internet yet again.

i see these issues (in no special order):

loss of control

this is probably the strongest argument against using third party proprietary services: you can’t fix them when they are broken. but cloud computing usually means a loss of privacy as well. the article [2], which mentions various points from richard stallman and larry ellison, probably makes this point clear. it is interesting that the SaaS wikipedia article [3] reads like a campaign for SaaS – probably written by someone with a marketing background. there is also the danger of losing your data to foreign countries, as mentioned in [7].

loss of own infrastructure

you don’t have your own infrastructure anymore, thus you don’t have physical control over your devices. additionally you then depend on working internet connections. it is likely that the infrastructure you rely on runs in one or several different countries.

loss of software not designed for the cloud

the various versions of the GPL had a great influence on how software could be used and distributed, but with the advent of the cloud this changes drastically. the way programs, especially webservices, are designed makes the GPL concept useless: since a hosted service is never distributed to you, its copyleft does not affect you at all. however, there is a new license, the ‘Affero General Public License’ [4], which fills that gap.

why is wordpress not licensed under the AGPL, i wonder? my first guess is laziness, as the author of every single patch would have to be asked for permission to change the license. but the wordpress hosters could also be using the GPL to greenwash their software, as they do not have to hand out proprietary extensions which might never be released otherwise. but who knows?!

loss of knowledge how to set up services similar to today’s cloud services

think about email – who operates his own mailserver nowadays? most friends of mine use google mail, and this implies: once you are familiar with a service and its workflow you usually do not want to change. especially if the service seems to be free, as google mail does (but most of my friends do not seem to care that google replaced the currency: privacy is used as payment instead).

as a consequence, the knowledge about how to run your own mail server gets lost. if you understand german, listen to alternativlos 18 – ‘Peak Oil, den Weltuntergang, und wie man sich vorbereiten kann’ (‘peak oil, the end of the world, and how to prepare for it’) [5], minute 74 ff. – they discuss this issue.

my personal experience

i have a strong tendency to use devices which are capable of providing certain services offline. this is why i put a lot of effort into the evopedia application, for instance. the nokia n900 is probably another good example where i try to maintain an offline infrastructure – i didn’t even have mobile internet on the n900 for a complete year, and yet i was able to do most things using sip/mappero/evopedia and others.

here are some thoughts about online services i use:

wordpress

i use wordpress.com right now and i really hate it for these points:

  • you can’t write offline
  • uploading images initially, and updating them later, is a frustrating process
  • i sometimes lose parts of articles while writing
  • there is no good process for offline backups
  • i hate the WYSIWYG editor as it does not work very well
  • wordpress is inconsistent in producing a good web 2.0 workflow: it feels like reloading the whole page all the time instead of updating single dom-tree elements only, as it would be done with web 2.0; if you don’t trust me, have a look at [6] – how the upcoming wikipedia editor works
of course i could host wordpress on my own webserver, and i have wanted to do that for a long time. the problem is that wordpress is optimized to be run on wordpress.com, thus i think it might be too much work for me to keep it supplied with proper security updates and plugin management. instead i am searching for a blog system which uses markdown in combination with git, but i haven’t yet found what i am looking for.
don’t get me wrong, i really like wordpress, but i don’t like this dependency and the lack of flexibility in using their software.

google mail/docs

i really love ‘google docs’ as it is a wonderful collaborative platform, but i can’t use it, as i would have to disclose to google all documents i’d be working on.

google android

like google mail and google docs, android has very good cloud integration. but if you want to use services other than google’s, it is a horrible platform. for instance i keep installing xabber [12]: although google talk is based on jabber, google intentionally requires you to install third party software in order to use non-google jabber servers. the same goes for most other services. if i had to use an android phone i would buy one with proper CyanogenMod [13] support.

github.com

a great service for source code hosting using git. still, the platform software itself is not available for self-hosting, unlike http://gitorious.org/ or http://gitlabhq.com/. github.com uses a wiki which is bound to the platform and not contained in the git repo.

note: although i have never used http://www.fossil-scm.org i like the idea that it contains a wiki in the repository as well

i use github.com only for free and open source projects.

better without clouds

the conventional use of the term ‘cloud’ simply indicates a buzzword or business term for vendor lock-in and centralized infrastructure you don’t have control over. that is good to know, as it helps to recognize and avoid such services. what one should use instead is decentralized infrastructure located near the user, connected to the internet where needed, giving the user control over the platform.

arguably this concept is implemented as a new trend called ‘personal cloud’ or ‘private cloud server’. but these terms limit the trend to personal or private matters, yet i would like to see it in businesses as well.

hardware

following the concept of decentralization, users can host their own files and other things such as address books / calendars on their own home devices.

a list of interesting devices to give you an idea:

  • sheevaplug [9] – there is even a nixos version for this device (by viric)!
  • pogoplug [10]
  • tonidoplug [11]
  • fritz!box (with myfritz and fritznas) [14]

software

software implementing services

a list of software i find interesting:

  • diaspora [15] – decentralized facebook
  • owncloud [16] – dropbox-like service
  • sparkleshare [22] – a collaboration and sharing tool designed to keep things simple and to stay out of your way
  • tomahawk [19] – a nice music streaming service
  • various p2p / torrent-like services:
    • mldonkey [17]

still, most ‘personal or private clouds’ scale differently compared to the big 3 mentioned at the beginning of this article. for instance, most of these services are configured in the client/server way, and they usually do not implement concepts such as failover, backups or load balancing. for that to happen, a new set of tools and decentralized frameworks based on p2p technologies is required – which has just not happened yet.

there is also a political issue: most internet users do not have a decent upload channel, which basically means that their internet connection is not suited for hosting services.

software for managing services

  • openshift [20] – a cloud computing platform-as-a-service product from red hat
  • openstack [21] – a global collaboration of developers and cloud computing technologists producing the ubiquitous open source cloud computing platform for public and private clouds
  • disnix [8] – a distributed deployment extension for Nix, a purely functional package manager
i’ve used none of these, but i like to point out that there is ongoing open source involvement here, and interestingly none of these technologies are used in private clouds. private clouds seem to implement the classical client/server paradigm at the moment. there is a remarkable exception, namely filesharing using p2p/kademlia, which implements a basically read-only storage that scales pretty well already.

a matter of design

to make the private cloud or a decentralized cloud a success we need:

  • a standardized package manager with proper software life-cycle management
  • symmetrical internet connections with decent upload/download speeds
  • transparent support for scalability/reliability/redundancy (the points mentioned in the beginning of the article)
  • powerful hardware with low power usage but capable of high loads
  • encryption and certificates or a chain of trust
  • ipv6 – we need good endpoint communication capabilities
  • a clear understanding of where we want to put our personal data and how we can protect it
i think each requirement on its own is already implemented somewhere, but not in combination with the others. there is not yet a library providing the software/protocol requirements, and the hardware is either not powerful enough or not intended to be used in the required way.

conclusion

still, it is a long way for private clouds to reach the same level of features/quality that the big clouds already have. for the time being it seems to be complicated for the average internet user to use the internet without losing too much of his individuality, and thus his freedom of expression.

links

[1] http://www.google.de/search?sourceid=chrome&ie=UTF-8&q=richard+stallman+cloud

[2] http://www.guardian.co.uk/technology/2008/sep/29/cloud.computing.richard.stallman

[3] http://de.wikipedia.org/wiki/Software_as_a_Service

[4] http://en.wikipedia.org/wiki/Affero_General_Public_License

[5] http://alternativlos.org/18/

[6] https://www.mediawiki.org/wiki/VisualEditor:InezSandbox

[7] http://www.engadget.com/2011/06/30/microsoft-european-cloud-data-may-not-be-immune-to-the-patriot/

[8] http://nixos.org/disnix/

[9] http://de.wikipedia.org/wiki/SheevaPlug

[10] http://pogoplug.com/

[11] http://en.wikipedia.org/wiki/Tonido

[12] http://www.xabber.com/

[13] http://www.cyanogenmod.com/

[14] https://www.myfritz.net/was_ist_myfritz.xhtml

[15] http://de.wikipedia.org/wiki/Diaspora_(Software)

[16] http://de.wikipedia.org/wiki/Owncloud

[17] http://de.wikipedia.org/wiki/Mldonkey

[18] http://trac.edgewall.org/

[19] http://www.tomahawk-player.org/

[20] http://en.wikipedia.org/wiki/OpenShift

[21] http://openstack.org/

[22] http://sparkleshare.org/




for quite some time i have been using a wiki at lastlog.de, a mediawiki to be precise, and i wonder why there is no wide adoption of the wiki principle. by that i don’t mean collaborative editing but, somewhat in contrast, the principle of being quotable.

lately, out of curiosity, i scrolled through my diploma thesis and checked the overall link stability. some links were broken. however, all wikipedia links worked. as stated in the document itself, i explicitly link to wikipedia because of its link stability. if i had wanted to, i could even have linked to a certain revision. but i decided not to, as the reader always has the option to look at an older revision, based on date and time.
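for illustration, such a revision link simply pins the oldid parameter in the URL (the article name and revision number here are made up):

  http://en.wikipedia.org/w/index.php?title=Package_manager&oldid=123456789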

the more interesting aspect, that is why i linked to wikipedia articles in the first place, is that i don’t want to waste time describing something when a different place already does so. if someone is smart enough to follow the ideas in my diploma thesis, i assume the same when it comes to judging the quality of wikipedia articles. and before linking a keyword (like ‘package manager’) to a certain wikipedia article which should describe it, i always read the article. the idea is twofold: first, i like to see if my conception or understanding matches what is in the article. second, if that is the case, i simply link it and forget about the whole thing. but if my understanding does not match the article, i can evaluate which version is better and pick what fits best.

for some online articles i had to link, i wasn’t even able to provide a direct link and therefore added a google search link to the document instead.

wiki editing has so many benefits, like being able to roll back to a previous version or to do collaborative work. why is there no wiki-like support, say, when editing libre office/word documents? maybe because back in time that was considered a waste of bits & bytes, but with compression that can’t be an argument today.

here is a use-case where that would be great: say you write a document and you pass it to someone else for review and corrections. often i would like the other person to make whatever changes he wants and later be able to roll back this or that change. with a wiki-like document structure this would be very easy.
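for illustration, a minimal sketch of that review round-trip with git as the underlying VCS (the file name and commit id are made up):

  git log --oneline thesis.md    # list the reviewer's changes, one commit each
  git revert a1b2c3d             # roll back exactly the one change you dislike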

if you don’t follow, just have a look at this link:

http://en.wikipedia.org/w/index.php?title=Linux&diff=490431450&oldid=489027763

and about link stability: this link might even work when this blog is long gone. 

i see so many benefits in using wikis and wiki-like concepts, but despite the wiki-web principle and decentralized VCSs there seems to be no wide use of them.

IMHO a webpage which does not implement a wiki principle, even this wordpress blog, is kind of stupid, as one can never be certain what is going on. one could say such a page is schizophrenic to some degree.

hopefully this will change in the future.

update: 11.5.2012 – it would be desirable if the mentioned link stability were independent of a strict TLD (top level domain). for example: if i move this blog to a different location, say to invalidmagic.de, then all the articles here stop working at the old address and links from other pages into them will fail.


documentaries


here is another bunch of documentaries which i forgot in the last posting…

space science related documentaries

energy related


documentaries


here is a bunch of documentaries which i would like to point out because of their exceptional quality:

space science related documentaries

  • BBC SPACE (very good) but i can’t provide a link as i can’t find any… *bummer*
    • BBC Space/1 – Star Stuff
    • BBC Space/2 – Are We Alone
    • BBC Space/3 – Staying Alive
    • BBC Space/4 – New Worlds
    • BBC Space/5 – Black Hole
    • BBC Space/6 – Boldly Go

note: for all these nasa missions i wonder how they get all the funding, and why there is so much military involvement (especially in ‘extreme astronomy’).

energy related

note: if you are interested in a sustainable concept for solving the energy problem with 100% renewables, read the books by hermann scheer:

podcasts (audio only)

a few different podcasts i liked very much:


how to integrate a daemon into nixos

in this short article i want to show how services can be added/used in nixos. i personally find the nixos way quite an easy and ‘clean’ approach!

as i integrated cntlm, my goals were:

  • easy to use
  • password must be stored in a safe place
  • service should not be run as root but as a special user: cntlm
it helped a lot to look at similar scripts like the sshd integration, but i also used “NixOS: A Purely Functional Linux Distribution” [1], which describes most of the aspects needed to get a service up and running.
seen as a developer, one has to write two nix expressions:
  • cntlm/default.nix
    the script which describes the software cntlm
  • nixos/modules/services/networking/cntlm.nix
    the script which describes the service
seen as a user, one has to modify only one nix expression:
  • /etc/nixos/configuration.nix
    the place where all the configuration happens

/etc/nixos/nixpkgs/pkgs/tools/networking/cntlm/default.nix

this nix expression describes the cntlm software and is quite simple, as it basically fetches the software, compiles it and installs it into the system. however, a user does not have to install this software manually using:

  • nix-env -i cntlm
still, this would be possible, and a normal user/root user could use the software in his profile this way.
anyway, here is the script:
     1  { stdenv, fetchurl, which}:
     2
     3  stdenv.mkDerivation {
     4    name = "cntlm-0.35.1";
     5
     6    src = fetchurl {
     7      url = mirror://sourceforge/cntlm/cntlm-0.35.1.tar.gz;
     8      sha256 = "7b3fb7184e72cc3f1743bb8e503a5305e96458bc630a7e1ebfc9f3c07ffa6c5e";
     9    };
    10
    11    buildInputs = [ which ];
    12
    13    installPhase = ''
    14      ensureDir $out/bin; cp cntlm $out/bin/;
    15      ensureDir $out/share/; cp COPYRIGHT README VERSION doc/cntlm.conf $out/share/;
    16      ensureDir $out/man/; cp doc/cntlm.1 $out/man/;
    17    '';
    18
    19    meta = {
    20      description = "Cntlm is an NTLM/NTLMv2 authenticating HTTP proxy";
    21      homepage = http://cntlm.sourceforge.net/;
    22      license = stdenv.lib.licenses.gpl2;
    23      maintainers = [ stdenv.lib.maintainers.qknight ];
    24    };
    25  }

the only point of interest might be the buildInputs (line 11), which includes which: in this build script ‘which gcc’ is used to test whether gcc is installed.

/etc/nixos/nixos/modules/services/networking/cntlm.nix

this expression is used to integrate cntlm as a system service.

     1  { config, pkgs, ... }:
     2
     3  with pkgs.lib;
     4
     5  let
     6
     7    cfg = config.services.cntlm;
     8    uid = config.ids.uids.cntlm;
     9
    10  in
    11
    12  {
    13
    14    options = {
    15
    16      services.cntlm= {
    17
    18        enable = mkOption {
    19          default = false;
    20          description = ''
    21            Whether to enable cntlm, which starts a local proxy.
    22          '';
    23        };
    24
    25        username = mkOption {
    26          description = ''
    27            Proxy account name, without the possibility to include domain name ('at' sign is interpreted literally).
    28          '';
    29        };
    30
    31        domain = mkOption {
    32          description = ''Proxy account domain/workgroup name.'';
    33        };
    34
    35        password = mkOption {
    36          default = "/etc/cntlm.password";
    37          type = with pkgs.lib.types; string;
    38          description = ''Proxy account password. Note: use chmod 0600 on /etc/cntlm.password for security.'';
    39        };
    40
    41        netbios_hostname = mkOption {
    42          default = config.networking.hostName;
    43          description = ''
    44            The hostname of your workstation.
    45          '';
    46        };
    47
    48        proxy = mkOption {
    49          description = ''
    50            A list of NTLM/NTLMv2 authenticating HTTP proxies.
    51
    52            Parent proxy, which requires authentication. The same as proxy on the command-line, can be used more than  once  to  specify  unlimited
    53            number  of  proxies.  Should  one proxy fail, cntlm automatically moves on to the next one. The connect request fails only if the whole
    54            list of proxies is scanned and (for each request) and found to be invalid. Command-line takes precedence over the configuration file.
    55          '';
    56        };
    57
    58        port = mkOption {
    59          default = [3128];
    60          description = "Specifies on which ports the cntlm daemon listens.";
    61        };
    62
    63       extraConfig = mkOption {
    64          default = "";
    65          description = "Verbatim contents of cntlm.conf.";
    66       };
    67
    68      };
    69
    70    };
    71
    72
    73    ###### implementation
    74
    75    config = mkIf config.services.cntlm.enable {
    76      users.extraUsers = singleton {
    77          name = "cntlm";
    78          description = "cntlm system-wide daemon";
    79          home = "/var/empty";
    80      };
    81
    82      jobs.cntlm = {
    83          description = "cntlm is an NTLM / NTLM Session Response / NTLMv2 authenticating HTTP proxy.";
    84          startOn = "started network-interfaces";
    85          environment = {
    86          };
    87
    88      preStart = '' '';
    89
    90      daemonType = "fork";
    91
    92      exec =
    93        ''
    94          ${pkgs.cntlm}/bin/cntlm -U cntlm \
    95          -c ${pkgs.writeText "cntlm_config" cfg.extraConfig}
    96        '';
    97      };
    98
    99      services.cntlm.extraConfig =
   100        ''
   101          # Cntlm Authentication Proxy Configuration
   102          Username        ${cfg.username}
   103          Domain          ${cfg.domain}
   104          Password        ${cfg.password}
   105          Workstation     ${cfg.netbios_hostname}
   106          ${concatMapStrings (entry: "Proxy ${entry}\n") cfg.proxy}
   107
   108          ${concatMapStrings (port: ''
   109            Listen ${toString port}
   110          '') cfg.port}
   111        '';
   112    };
   113  }
notable parts are:
  • (line 1-13) if interested in the general nix language: read [1] page 6,7 (page 5 might also be interesting)
  • (line 18-66) where a list of options is declared (some with default arguments; some without a default argument, which forces the user to set them)
  • (line 76-79) where the service is set up to run as a different user (a security measure)
  • (line 92-97) where the configuration is generated on the fly (yes, cntlm.conf is regenerated every time the configuration in /etc/nixos/configuration.nix is changed; a user using the cntlm service on nixos does not change cntlm.conf manually)
  • (line 99-111) where a minimal configuration (an extract of the example cntlm.conf) is parameterized.
  • (line 106) where a list of items is transformed into a multi-line structure, where:
    cfg.proxy = [ "foo" "bar" "baz" ];
    is transformed into:
    Proxy foo
    Proxy bar
    Proxy baz

how to make use of the above expressions

a user only has to append this configuration to /etc/nixos/configuration.nix, and cntlm will be installed, configured and started:

  services.cntlm = {
    enable=true;
    username="myusername";
    domain="mydomain";
    proxy=[ "192.168.3.5:1234" ];
  };
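to apply the change, rebuild the system; a minimal sketch (run as root):

  # evaluates /etc/nixos/configuration.nix, builds cntlm, generates
  # cntlm.conf on the fly and (re)starts the service
  nixos-rebuild switch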

summary

in contrast to most other distributions, nixos subjects not only packaging but also configuration management (the /etc stuff) and runtime management to ‘clean’ package management. this is a very clean design which helps to avoid lots of pitfalls.

links

[1] http://www.st.ewi.tudelft.nl/~dolstra/pubs/nixos-jfp-final.pdf

Read Full Post »

finally … the online publication of my diploma thesis (DT) is here: it can be found online at [1], including the source at [2].

i hope that the terminology introduced in this DT (chap 9) will be used. this is of course also true for the concepts engineered in chap 7.

candies can be found here:
 - chap 4: components used in a package manager
 - chap 4.6: different integration levels of a package manager
 - chap 5.13: ways to replicate a system
 - chap 5.15 ff
 - chap 6.2: evopedia deployment summary
 - chap 7 (you might want to read this a few times, it is quite complex)
 - chap 9: here i introduce some new terminology in package management
           (probably a _must read_)

see also the README in [2] for further information.

links

[1] https://github.com/qknight/Multi-PlatformSoftwarePackageManagement/blob/master/Multi-PlatformSoftwarePackageManagement.pdf

[2] https://github.com/qknight/Multi-PlatformSoftwarePackageManagement

Read Full Post »


i have a gentoo system inside virtualbox, but i wanted to run some ‘long term tests’, so i decided to migrate it to a libvirt machine on a host running ‘fedora core 15 beta’.

problems converting the image

first i tried to migrate the ‘Gentoo 64 (portage).vdi’ directly to a libvirt image, using [2]. but whatever i tried, the image was never bootable afterwards, so i decided to copy all the files over ssh instead.
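for reference, a minimal sketch of one way to do such a conversion, assuming qemu-img with vdi support is installed and qcow2 as the target format (note that for me no conversion produced a bootable image):

  # read the virtualbox vdi image and write a qcow2 image for libvirt/kvm
  qemu-img convert -f vdi -O qcow2 "Gentoo 64 (portage).vdi" gentoo.qcow2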

  1. boot both virtual machines using the ‘grml64-medium_2010.12.iso’.
  2. assign the ip addresses
    while on the virtualbox side i was using vboxnet0 in a host-only networking schema, i used a bridge on the other machine, which involved lots of manual configuration: disabling networkmanager (on fedora core, remember?), removing the eth0 configuration (which happens to be called em1) and adding a new configuration for the bridge br0 (using eth0). a sketch of this bridge setup follows after the list.
  3. finally i could ping from the virtualbox image to the libvirt guest system
  4. i used ‘rsync -av /mnt/gentoo -e ssh 192.168.66.20:/mnt/gentoo’
    Note: both local gentoo systems were mounted at /mnt/gentoo
  5. but libvirt used an ide host controller (which was very slow)
    therefore i manually removed the ide controller and replaced it with a VirtIO disk, using ‘qcow2’ as storage format and ‘Virtio’ as bus.
  6. after all the copying i installed grub (grub-1.99rc1) but the original system had a grub1 config!
    the conversion was not simple!
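a minimal sketch of the bridge setup on the fedora side, using the standard network-scripts layout (the exact file contents here are assumptions, adapt device names and addressing to your setup):

  # /etc/sysconfig/network-scripts/ifcfg-br0 (the new bridge)
  DEVICE=br0
  TYPE=Bridge
  BOOTPROTO=dhcp
  ONBOOT=yes

  # /etc/sysconfig/network-scripts/ifcfg-em1 (attach the nic to the bridge)
  DEVICE=em1
  BRIDGE=br0
  ONBOOT=yes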

the grub pitfall

virtualbox image using grub1:

cat /boot/grub/menu.lst

default 0
timeout 30
#splashimage=(hd0,0)/boot/grub/splash.xpm.gz

title Gentoo Linux 2.6.24-r7
root (hd0,0)
kernel /boot/kernel-genkernel-x86_64-2.6.36-gentoo-r5  root=/dev/ram0 real_root=/dev/sda1
initrd /boot/initramfs-genkernel-x86_64-2.6.36-gentoo-r5

in comparison: ‘libvirt guest’ using grub2

cat /boot/grub/grub.cfg

set default=0
set timeout=30

menuentry "Gentoo Linux 2.6.36-gentoo-r5" {
        insmod part_msdos
        insmod ext2
        set root=(hd0,msdos1)
        linux /boot/kernel-genkernel-x86_64-2.6.36-gentoo-r5 root=/dev/ram0 real_root=/dev/vda1
        initrd /boot/initramfs-genkernel-x86_64-2.6.36-gentoo-r5
}

Note: the important differences are ‘set root=(hd0,msdos1)’ vs. ‘root (hd0,0)’, ‘linux’ vs. ‘kernel’ and ‘real_root=/dev/vda1’ vs. ‘real_root=/dev/sda1’. take care of the different config filename as well (grub.cfg instead of menu.lst)!

anyway: in the grml shell you can install grub into /dev/vda using:

grub-install --root-directory=/mnt/gentoo /dev/vda

the kernel configuration pitfall

a libvirt guest must be aware of /dev/vda (virtio), but my genkernel-built kernel was not. i also lacked ext4 support. so it is a good idea to build this into the kernel directly (i had it included as modules, but that did not work well).

cat /etc/kernels/kernel-config-x86_64-2.6.36-gentoo-r5 | grep -i virt | grep -v "^#"

CONFIG_VIRT_TO_BUS=y
CONFIG_VIRTIO_BLK=y
CONFIG_VIRTIO_NET=y
CONFIG_VIRTIO_CONSOLE=y
CONFIG_HW_RANDOM_VIRTIO=y
CONFIG_VIRTUALIZATION=y
CONFIG_VIRTIO=y
CONFIG_VIRTIO_RING=y
CONFIG_VIRTIO_PCI=y
CONFIG_VIRTIO_BALLOON=y

just use ‘genkernel’ to build the new kernel (and don’t forget the ext4 support, as i initially did).
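a minimal sketch of the rebuild, assuming a standard genkernel setup; --menuconfig lets you double-check the virtio and ext4 options before compiling:

  # rebuild the kernel and initramfs; make sure the CONFIG_VIRTIO_* options
  # and CONFIG_EXT4_FS are enabled (built in, not as modules)
  genkernel --menuconfig all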

fedora core network problems

i basically used [3] to make it work. the benefit is that em1 is now not used directly; instead the system uses br0 to access the internet.

PRO: the libvirt guests each get their own ‘mac address’ and are thus separated, unable to see each other’s traffic.

fedora core yum problems

i also tried to install virtualbox and followed the instructions found on virtualbox.org, but soon i had the problem that the virtualbox kernel modules wouldn’t build: they need ‘kernel-devel’. yet after installing the kernel-devel package using ‘yum install kernel-devel’ there was a version mismatch between the running kernel and the kernel-devel headers.

summary

libvirt and the ‘virtual machine manager’ are very nice:

  • i like that it is so easy to start a virtual machine when the host machine boots.
  • i also like the ‘virtual machine manager’ as it shows cpu/disk io/network io nicely
    (but that is not limited to libvirt virtualizations).
  • fedora core 15 beta was running quite nicely (except that it crashed while i was writing this article)
    so i can at least say: it ran for 6 straight hours without a crash ;P

links

[1] http://libvirt.org/

[2] http://blog.loxal.net/2009/04/how-to-convert-vdi-to-vmdk-converting.html

[3] http://www.howtoforge.com/virtualization-with-kvm-on-a-fedora-11-server

