— A solution that is not precisely correct but that solves the problem well enough for the time being. A quick-and-dirty solution.
The term "for the time being" here refers to some immediate constraint that cannot currently be overcome, such as
- A lack of time or resources needed to produce a correct (or more correct) solution.
- A lack of the understanding needed to produce a more correct solution (see the Turing test, below).
- A simple lack of will, or of the work ethic, needed to produce a proper solution.
Kluge is a difficult word: people understand what it means, but the concept is hard to convey in concise, unambiguous terms. There is some irony in this.
In the age of impermanent text, the definition sometimes decays further, into one that is passed along merely because it is easy to put into words, rather than one that fully describes the underlying concept. Ironically, these simplified, easy-to-explain definitions become kluges themselves.
A good metaphor for a kluge is force-fitting the wrong puzzle piece into place because its shape happens to mate well with a neighboring piece. The pieces work together just fine at the local, immediate level, though even there one might notice that the images don't quite match up. Worse, the kluge causes increasingly serious problems as work on the puzzle progresses: each further kluge needed to keep the solution going becomes less effective and less fitting, and it won't be long before the whole thing becomes unworkable.
Another way to define words with difficult-to-relate meanings is by example; the oversimplified definitions of the word kluge, mentioned above, are one such example.
. . . . . . .
Another example of a kluge is the Turing test as a means of determining whether a computer has achieved true intelligence (i.e., machine consciousness). The test, originally conceived as a gender-guessing game, isolates participants in separate rooms connected only by terminals. A human at one terminal must guess, by conversing over the terminal, whether the party on the other side of the link is a human or a machine. If a machine can fool the human into thinking it is human, this is taken as an indication that true (or truer) conscious intelligence has been achieved.
This does not actually solve the problem of determining whether the machine is truly conscious. But at a time when nobody can say, in concise (read: implementable) terms, what consciousness is, it is at least something that might move our efforts and our understanding of consciousness forward. The problem, of course, was that it didn't lead to machine consciousness, or to real intelligence. It merely led to increasingly sophisticated versions of simulations like ELIZA, which were, as you might guess, designed primarily to fool humans into thinking they were human.
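The kluge-like nature of such programs is easiest to see in code. Below is a minimal sketch, in Python, of the pattern-substitution trick that ELIZA-style programs rely on; the specific patterns and canned replies here are invented for illustration (the real ELIZA used a much larger rule script). Note that the program has no understanding of the input at all; it only mates surface shapes, like the wrong puzzle piece.

```python
import re

# Each rule pairs a regex with a canned reply template. The rules below
# are invented for illustration; no actual comprehension is involved.
RULES = [
    (re.compile(r"\bI need (.*)", re.IGNORECASE),
     "Why do you need {0}?"),
    (re.compile(r"\bI am (.*)", re.IGNORECASE),
     "How long have you been {0}?"),
    (re.compile(r"\bmy (\w+)", re.IGNORECASE),
     "Tell me more about your {0}."),
]
DEFAULT = "Please go on."

def respond(text: str) -> str:
    """Return a canned reply by pattern-matching the input text."""
    for pattern, template in RULES:
        match = pattern.search(text)
        if match:
            # Splice the user's own words into the template; this echo
            # is what creates the illusion of understanding.
            return template.format(*match.groups())
    return DEFAULT
```

For instance, `respond("I need a vacation")` echoes back "Why do you need a vacation?", while any input matching no rule falls through to the stock deflection "Please go on." Layering on more rules makes the illusion more convincing without bringing the program any closer to consciousness, which is exactly the kluge's trajectory described above.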