I'm currently trying to get to grips with using Linux. I'm running Linux Mint (an Ubuntu variant).

When I install something using apt-get, files seem to get scattered across various system directories at random ("system" meaning not my home directory). Of course I know it isn't actually random; it just seems that way at the moment.

However, if I download the latest version of Firefox from the website, it just comes as an archive: I extract it to one directory and run it.

I haven't tried compiling anything from source, but I'd bet it just compiles everything into one directory and then I run it from there.

  1. Why the difference between apt-get and doing it yourself?

  2. What's the best approach for installing new apps?

  3. Any general pointers on what the various directories in Linux are for, and when I should tamper with them, would be much appreciated too.

+8  A: 

Have a look at the Filesystem Hierarchy Standard. Software that gets installed by the system's package manager (e.g. apt-get or aptitude) is laid out as the standard describes. Often you have one or more binaries in /usr/bin, some documentation in /usr/share/doc/<package name>, configuration files in /etc and possibly some libraries in /usr/lib.
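
If you're curious exactly where apt put a package's files, dpkg can list them (the package name here is just an example):

dpkg -L firefox          # list every file the "firefox" package installed
dpkg -S /usr/bin/apt-get # the reverse: find out which package owns a given file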

If you download the source for something, you normally build and install it with

./configure
make
make install

./configure configures the build system for your machine, make builds everything, and make install installs it on your system. Where the files end up depends on the --prefix option given to the configure script. By default that is /usr/local: binaries go to /usr/local/bin, libraries to /usr/local/lib, and so forth...
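
As a concrete sketch (the prefix is spelled out here only for illustration; it is what configure uses by default anyway):

./configure --prefix=/usr/local    # decide where the files will go
make                               # compile everything
sudo make install                  # copy binaries to /usr/local/bin, libraries to /usr/local/lib, ...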

Johannes Weiß
A: 
  1. apt-get conforms to the aforementioned standard. It figures out and resolves dependencies, creates menu items, etc.
  2. apt-get. One of the benefits is that an app installed with apt-get automatically gets updated (see the example below).
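
For example (the package name is just a placeholder):

sudo apt-get update         # refresh the package lists from the repositories
sudo apt-get install vlc    # install a package plus everything it depends on
sudo apt-get upgrade        # later, pull in updates for everything installed this way
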
Kees de Kooter
+4  A: 

(1) Because apt-get knows the layout of your system and knows the proper places to put various kinds of files. (Not "knows", actually more like "has been programmed", but I digress.) ZIP files don't normally include a filesystem layout; you're expected to pick out the pieces and put them in the appropriate directories yourself. Note that when you compile something from source, the last step is usually to run the command

make install

which will put all the pieces of the compiled program in the proper places (or at least, configure's best guess at the proper places) on your system.

(2) Best approach: use apt-get. If there is something you can't get from apt-get, second best is probably to download the source code and run the installation procedure, which is typically

./configure
make
make install

(note that all these commands are run from the directory which has the uncompiled source).

(3) is probably covered pretty well by the link Johannes Weiss gave you.

P.S. Many programs can be compiled and run without being "installed" - that is, you can run them straight from the directory you compile them in (or the directory you extract them in, if you download them in precompiled form). But in that case, they're not really integrated with the system as a whole; they're just things you happen to have in a directory somewhere. This approach can be used when you don't have root access on a Linux machine; you can download and compile a program in your home directory and run it from there.
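
A rough sketch of that no-root approach, assuming the program uses a standard configure script and that ~/.local is an acceptable place to keep it (both are just assumptions for the example):

./configure --prefix=$HOME/.local    # keep everything under your home directory
make
make install                         # no sudo needed; nothing touches the system directories
export PATH=$HOME/.local/bin:$PATH   # add this line to ~/.bashrc to make it permanent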

David Zaslavsky
A: 

Never used Mint, but as it's an Ubuntu variant you may be able to get the latest version of a piece of software from its Launchpad PPA repository.

E.g. to install OpenOffice 3 on Ubuntu 8.10 (Intrepid Ibex) I added the following repository:

deb http://ppa.launchpad.net/openoffice-pkgs/ubuntu intrepid main

and then just installed it via apt-get.
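
Roughly, the steps look like this (the exact package name is an assumption here; check what the PPA actually calls it):

echo "deb http://ppa.launchpad.net/openoffice-pkgs/ubuntu intrepid main" | sudo tee -a /etc/apt/sources.list
sudo apt-get update
sudo apt-get install openoffice.org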

alucardni
A: 

1. Why the difference between apt-get and doing it yourself?

Personally I prefer apt-get: if you have your system under package manager control, then it's easy to uninstall and update your installed software.
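
For example, removing something later is a one-liner (the package name is a placeholder):

sudo apt-get remove somepackage          # uninstall, keeping its configuration files
sudo apt-get remove --purge somepackage  # uninstall and delete its configuration files too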

2. What's the best approach for installing new apps?

Personally I prefer apt-get, but this is my choice. :-)

3. Any general pointers on what the various directories in Linux are for, and when I should tamper with them, would be much appreciated too.

Read this link.

Regards

SourceRebels
+2  A: 

I haven't tried compiling anything from source, but I'd bet it just compiles everything into one directory and then I run it from there.

Sadly, no.

Most traditional apps and libraries based on the configure-build-install model also spew themselves over ‘lib’, ‘bin’, ‘include’ (and so on) directories.

The trick is you can change the ‘prefix’, usually by passing it to configure:

./configure --prefix=/anywhere

and the app will plonk itself into ‘/anywhere/lib’, ‘/anywhere/bin’ and so on. Prefix defaults to ‘/usr/local’, which is left as a place for you-as-admin to install any programs you want all users to have access to, but which are not part of the OS/distribution.

Dumping all your local programs into /usr/local is reasonable where there are only a few of them, but starts to get unmanageable when there are lots, or some of them start dropping insane amounts of files. (For example, MySQL and Apache both dump a horrible mess of little tools into .../bin.)

In this case you can make a single prefix for each app, such as ‘/usr/local/myapp’ or, commonly, ‘/opt/myapp’. This allows you to deal with the application as a single unit, so you can get rid of it by deleting it. But it's not a standalone unit you can rename or move, because the linker likes to know full pathnames.
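
A sketch of that per-app-prefix approach ("myapp" is of course a placeholder name):

./configure --prefix=/opt/myapp   # give the app its own private prefix
make
sudo make install                 # everything lands under /opt/myapp/bin, /opt/myapp/lib, ...
sudo rm -rf /opt/myapp            # later, removing the app is a single delete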

You can also, as an unprivileged user, install into a prefix in your home directory such as '/home/me/.local/myapp'. You can then add '/home/me/.local/myapp/bin' to your PATH environment variable to be able to run it just by typing its command. If you are building a library that other applications will have to link to, things are more complicated still, as you will have to tell them where to find it so they can get the 'lib' and 'include' files they need.
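
For that last part, a purely illustrative set of environment variables, assuming a library was installed under the prefix /home/me/.local/mylib, might look like:

export PATH=/home/me/.local/myapp/bin:$PATH                         # find the app's commands
export CPPFLAGS="-I/home/me/.local/mylib/include"                   # let other builds find its headers
export LDFLAGS="-L/home/me/.local/mylib/lib"                        # ...and link against its libraries
export LD_LIBRARY_PATH=/home/me/.local/mylib/lib:$LD_LIBRARY_PATH   # ...and find them at run time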

Distro-owned packages are (usually) compiled into the prefix ‘/usr’ — ‘/usr/bin’ contains all the commands provided by all the programs on the system, ‘/usr/lib’ contains all the libraries, and so on, each folder containing a thousand programs mixed together in a confusing heap.

Mess with these directories at your peril. Managing this maze is what package managers (such as apt) are for. It's a hard job, and sometimes the package manager gets confused and screws up leaving you in dependency hell, Linux's analogue to DLL hell. It's fun!

Of course this is all absolutely unsatisfactory, but you can't criticise it because It's The Unix Way™ and if you don't like it you should go back to Windows, you luser™. There are some efforts to improve the Unix packaging experience, such as GoboLinux and 0install, but they're still pretty much fringe activities at the moment. Most distros think that their package manager is the only tool you'll ever need, and can't imagine that you might want to install something that doesn't come from their repositories.

bobince
When using Debian derivatives it's rare to find a package that doesn't exist in their repositories unless you need fairly domain-specific tools or very specific versions. That said, it is all very satisfactory, but of course Windows l-users like to complain about it even though they are no better.
Adam Hawes
thanks, that was really informative!
bplus
A: 

It's traditional in bigger facilities to use /opt/* for locally compiled programs, so files end up in /opt/bin, /opt/usr/bin, /opt/lib, etc. I think most people now use /opt/appname/* to avoid having a massive jumble. It makes PATH slightly more complex, but so what? Nowadays those facilities probably create their own packages that can be installed with apt-get; it's a lot cleaner than trying to deploy the software by hand on dozens or hundreds of systems.

/usr/local/* is reserved for stuff that is developed locally.

As to why stuff is scattered, there are various historical reasons for it, but one big one is security. Different partitions are a real pain in Windows because they have different drive letters - C:, D:, Z:, etc. In Unix/Linux they're transparent. In Windows you can't easily set different properties on each partition; in Unix/Linux there are 'mount options' that can control a lot of behavior. The most obvious one to most users is that you can mount a partition as 'noexec' - users can't run programs located in their own directories. This slams the door on a lot of abuses. Unix/Linux admins will also be familiar with 'nosuid' (users can run applications, but typically not as a different user) and 'nodev' (users can't create their own special devices for raw access to the hard disk, network, or other users' terminals).
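
As an illustration only (the device name and filesystem type are made up for the example), an /etc/fstab line mounting a separate /home with those restrictive options might look like:

/dev/sda3  /home  ext3  defaults,nosuid,nodev,noexec  0  2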

bgiles
+1  A: 

The uses of the different directories are specified by the Filesystem Hierarchy Standard. The most important directories are:

  • /boot - files needed by the boot loader to load Linux
  • /etc - system-wide configuration files
  • /home - user files
  • /tmp - temporary files (usually lost on reboot)
  • /var - system working files (e.g. the print queue); slightly more permanent than /tmp

The actual software itself gets scattered around all kinds of directories (/bin, /sbin, /usr/bin, /usr/lib, /usr/share, etc.). This is for historical reasons.

There are several systems for putting each application in its own directory, including:

  • GNU Stow
  • ROX application directories
  • Zero Install

The question then is how to integrate these with your environment so you can use them:

  • GNU Stow scatters symbolic links to the program's files in the usual places, rather than the files themselves (a sketch of a typical Stow workflow follows this list). Cleaning up then means deleting the program's directory and then searching for and removing the broken links.

  • ROX application directories are run from the ROX desktop's file manager (ROX-Filer), i.e. they don't integrate with other systems (the command line, etc.).

  • Zero Install has various front-ends for different environments. For example, it can create a freedesktop.org .desktop file so that the program appears on your Applications menu, or it can create a launcher script in $PATH so you can run it from the shell prompt.
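
A sketch of a typical Stow workflow (the version number and the /usr/local/stow layout are just the usual convention, not a requirement):

./configure --prefix=/usr/local/stow/myapp-1.0   # install into the stow tree
make && sudo make install
cd /usr/local/stow && sudo stow myapp-1.0        # symlink it into /usr/local/bin, /usr/local/lib, ...
sudo stow -D myapp-1.0                           # later, remove just the symlinks again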

There's also the question of how programs find their dependencies (e.g. libraries):

  • For GNU Stow, this is done via the symlinks it created. In general, this can lead to conflicts (just like using apt-get), e.g. you can't have two different versions of the same Python library installed at once.

  • For ROX libraries, the library's directory must go in a particular location. Again, this can't support multiple versions of a single library.

  • Zero Install selects dependencies at run-time and "injects" them into the program using environment variables, thereby avoiding conflicts altogether.

Thomas Leonard