In traditional classical logic, if a logic program S is inconsistent (unsatisfiable), then S |= p for all p. That is, any proposition p may be derived from S, since "anything follows from a false premise." While logically sound, this is contrary to ordinary, common-sense reasoning. In everyday life, people usually do not give up and accept any conclusion whatsoever when they are confronted with inconsistent or contradictory information. Rather, they find some means of resolving the conflicts in order to arrive at definite conclusions.
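One standard way to see why S |= p for all p is the derivation usually called the principle of explosion. Suppose S entails both p and ~p, and let q be any proposition at all:

    1. p           (from S)
    2. p \/ q      (1, disjunction introduction)
    3. ~p          (from S)
    4. q           (2, 3, disjunctive syllogism)

Since q was arbitrary, every proposition follows from S.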
As an example, consider the jury process. Jurors in a trial are often presented with evidence that both supports and refutes a proposition ("the defendant is guilty"), yet they usually still manage to reach a conclusion ("guilty" / "not guilty"). One means by which jurors probably resolve conflicting evidence is to isolate subsets of the evidence which are consistent and which tend to support some intermediate conclusion. For instance, evidence A, B, and C may point to a sufficient 'window of opportunity', while P, Q, and R may support the theory that the defendant committed the crime by means X. Now, it is possible that A contradicts P, so that the body of evidence as a whole is inconsistent. Nevertheless, these intermediate conclusions are useful as a basis for a final decision (verdict).
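To make this subset-isolation idea concrete, here is a minimal sketch in Python (not the formalism of [Fisher95]; the evidence names and helper functions are purely illustrative). It treats each piece of evidence as a set of propositional literals, calls a set of literals consistent when no atom occurs both plain and negated, and enumerates the maximal consistent subsets of an otherwise inconsistent body of evidence:

    from itertools import combinations

    # Illustrative sketch only: each piece of evidence is a set of
    # propositional literals, where "~x" denotes the negation of atom x.

    def consistent(literals):
        """A set of literals is consistent if no atom occurs both plain and negated."""
        return not any(lit.startswith("~") and lit[1:] in literals for lit in literals)

    def maximal_consistent_subsets(evidence):
        """Enumerate the groups of evidence items whose combined literals are
        consistent and which are maximal with respect to set inclusion."""
        items = list(evidence.items())
        maximal = []
        for size in range(len(items), 0, -1):      # largest groups first
            for combo in combinations(items, size):
                names = {name for name, _ in combo}
                literals = set().union(*(lits for _, lits in combo))
                if consistent(literals) and not any(names < found for found in maximal):
                    maximal.append(names)
        return maximal

    # Hypothetical jury-style evidence: A contradicts P, so the body of
    # evidence as a whole is inconsistent.
    evidence = {
        "A": {"window_of_opportunity"},
        "B": {"near_scene"},
        "P": {"~window_of_opportunity"},
        "Q": {"used_means_X"},
    }

    print(maximal_consistent_subsets(evidence))
    # -> the two maximal consistent groups: {A, B, Q} and {B, P, Q}

Run on this toy evidence, the sketch isolates two maximal consistent "theories", each of which could support an intermediate conclusion, even though the evidence taken together is contradictory.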
The logical concepts introduced in [Fisher95] represent an attempt to help formalize this process of first identifying subsets of consistent information and then (possibly) making further derivations from them. They provide a framework for dealing with logic programs which may be inconsistent as a whole but which may nevertheless contain interesting sub-programs that are in themselves consistent.