Interpreters, Open Source, and Security vulnerabilities (was Re: [TriLUG] OT: www.hexblog.com - a fix for the WMF vulnerability.)
Rick DeNatale
rick.denatale at gmail.com
Wed Jan 4 17:54:50 EST 2006
On 1/4/06, Tanner Lovelace <clubjuggler at gmail.com> wrote:
>
> I don't know about you, Rick, but I've always used the phrase
> "programming an interpreter" to indicate writing a program for
> an interpreter because to have it mean actually writing the interpreter
> itself would be silly.
Well, I come from a background where we wrote Smalltalk and Java VMs,
so "programming an interpreter" meant just that. I haven't heard many
people talk about programming the x interpreter, rather than writing
an x program (where x might be BASIC, perl, ruby, ...), unless they
were actually writing or changing an x interpreter.
The point of the distinction, which came from earlier in this thread,
was that the vulnerability here doesn't come from having an
interpreter involved, but from having executable code on the removable
device involved in the voting machine code, and particularly in the
auditing process.
Interpreters tend to get a bad rap for security and other things,
which isn't justified. I used to expend a good portion of my energy
disabusing customers of conventional wisdom about "interpreted"
languages.
Some language/library features do cause security problems, such as
dynamic compilation/execution, or perl's double-edged pseudo-filenames,
where an open() argument like "ls |" yields a pipe from the stdout of
a shell command -- the mechanism behind the well-known awstats
vulnerability. Languages with these features do tend to be implemented
with interpreters/virtual machines, but the same problems can also
show up in or through "compiled" languages, in things like SQL
bindings.
In some ways, users of compiled languages might be lulled into a
lowered state of vigilance about security vulnerabilities. As the
golfers say, "It ain't the arrow, it's the Indian." Security is in
the hands of the craftsman more than the tools.
And in the case of, say, a voting machine, where a dishonest company
MIGHT want to put in a backdoor to tilt the playing field, we need to
be careful not to let the salesman assure us that we don't need to
audit the system, no matter WHAT the underlying technology is, or
whether the alleged source code is available or not.
I don't know how many of you recall the "obfuscated voting machine"
contest that was covered on /. a few years back. The winner was
portable C code which cooked the results, and only did so on the
date of the election. Even with the source code, the knowledge that
it was crooked, AND a hint that it involved a buffer overflow, the
code looked perfectly reasonable, to the extent that it required
stepping through it with a debugger to find which line of code
changed the result, and then a good bit of head scratching to
figure out how it was doing it.
And, as an attempt to claim this as OT, I hope that some find this
a valuable discussion of the real interactions between security,
trust, language implementations and features, and source availability.
I'm a computer geek with enough gray hair to know that I shouldn't
trust any single layer for protection against accidental or
intentional harm.
Peace, brother!
--
Rick DeNatale
Visit the Project Mercury Wiki Site
http://www.mercuryspacecraft.com/