[TriLUG] Debian Sid/Unstable
Kevin Hunter Kesling
hunteke at earlham.edu
Fri Feb 28 09:53:31 EST 2014
At 7:36pm -0500 Wed, 26 Feb 2014, Phil Smith wrote:
> "Unstable" doesn't mean what most people assume. Pick any package's
> version in Sid and check vs. the upstream project page, you'll often
> find Sid ("Still In Development") doesn't even have the latest
> version the project maintainer considers "stable". What "unstable"
> refers to the most is that some developers are developing against
> fairly old "stable" libraries from 2-3 years ago. The shared
> libraries evolve over time, with new/renamed entry points, and in
> some cases there just is no way to build a system that has both
> package X and package Y until the developers update their code to
> work with current releases of shared libraries.
Note: I've not studied this in-depth (only off-hand idle thinking and
observation), so corrections to my facts and thinking are welcome.
You know, this is something about which I've idly wondered since
roughly 2006, when I first got my mitts on an OS X machine and realized
that HDD space was effectively unlimited (from the standpoint of
someone who entered the computing world when 120 MiB was considered
excessive).
I recall noting that Apple's solution to Windows' DLL hell was to
encourage application-specific copies of common libraries. Through a
combination of Xcode, developer recommendations, and providing only a
small but stable system ABI, they simultaneously enabled programs to
"just work" and made uninstall easy:
1. "Just working" came from applications only accessing essential
(stable) system ABIs, with all other references being relative
to the application-specific directory (see the sketch just after
this list).
2. As all application-specific code was maintained in application-
specific directories, there was no issue with botched uninstalls:
there was no system "registry" in which to forget entries, and
no libraries stored in the system directories that _may_ have
had multiple programs relying on them. Just throw away the
directory, and the program was removed. (This further made
the user-interaction paradigm very intuitive for end-users:
trash the "application", which is just a directory with an icon.)
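
To make point 1 concrete, here's a rough sketch (Python, purely
illustrative -- the "lib/" layout and "libexample.so.1" name are made
up, and this is not how OS X actually implements it) of "look in my
own directory first, fall back to the system copy":

    # Illustrative only: prefer the copy of a library shipped inside
    # the application's own directory; fall back to the system copy
    # (the small, stable ABI) only when no bundled copy exists.
    import ctypes
    import os
    import sys

    APP_DIR = os.path.dirname(os.path.abspath(sys.argv[0]))

    def load_library(name):
        bundled = os.path.join(APP_DIR, "lib", name)
        if os.path.exists(bundled):
            return ctypes.CDLL(bundled)   # application-specific copy
        return ctypes.CDLL(name)          # whatever the system provides

    # lib = load_library("libexample.so.1")   # hypothetical library

The lookup never depends on a shared system location that some other
program might also be relying on.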
Having multiple copies of nearly identical code is perhaps wasteful in
terms of disk space, but code size pales in comparison to user data
(e.g., movies, photos, music), and doing so enables the above two points.
This has been the case since at least 2006. We're now eight years
later, and I'm curious -- for exactly the reason you mention about
packages X and Y having conflicting system library requirements -- why
at least the desktop Linux distributions haven't attempted to migrate
to something along these lines. As I understand it, this coupling to
shared system libraries is the main reason it takes a whole new version
of a distro to get updated application packages. I can understand this
restriction, perhaps, for packages that rely on certain kernel
interfaces, but for the rest?
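
In toy form (made-up package names, version numbers, and ranges --
just the shape of the problem you describe):

    # Toy model: each package only works against a range of libfoo
    # versions, but a system-wide install holds exactly one version.
    requirements = {
        "package-x": (1, 2),   # needs libfoo >= 1 and <= 2
        "package-y": (4, 5),   # needs libfoo >= 4 and <= 5
    }
    available_versions = [1, 2, 3, 4, 5]

    def coinstallable(reqs, versions):
        """Is there one libfoo version that satisfies every package?"""
        return any(all(low <= v <= high for (low, high) in reqs.values())
                   for v in versions)

    print(coinstallable(requirements, available_versions))   # -> False

With per-application copies, that constraint simply disappears: each
package ships the libfoo it was built against.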
If the concern is disk usage and multiple copies of the same library
in umpteen directories, I might argue that block-level deduplication is
an answer (e.g., btrfs, zfs).
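
For a crude sense of the duplication involved, here's a file-level
estimate (a lower bound on what block-level dedup could reclaim; the
directory paths are hypothetical):

    # Bytes occupied by exact duplicate files beyond their first copy,
    # across a few hypothetical per-application directories.
    import hashlib
    import os
    from collections import defaultdict

    def duplicated_bytes(roots):
        sizes_by_hash = defaultdict(list)
        for root in roots:
            for dirpath, _dirs, files in os.walk(root):
                for name in files:
                    path = os.path.join(dirpath, name)
                    try:
                        with open(path, "rb") as fh:
                            digest = hashlib.sha256(fh.read()).hexdigest()
                        sizes_by_hash[digest].append(os.path.getsize(path))
                    except OSError:
                        continue
        return sum(sum(sizes[1:]) for sizes in sizes_by_hash.values())

    # print(duplicated_bytes(["/opt/AppA", "/opt/AppB"]))  # made-up paths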
Perhaps this gets tricky with the Unix philosophy that tools do one
thing and one thing well? That is, is it possible that the command-line
interaction between, say, GNU grep v(n-5) and v(n+1) is different
enough to matter?
Does anyone more informed than myself have an insight into this, beyond
"that's just the way it's always been done"?
Kevin