I'm going to keep the previous commit in the history in case we rethink it later, but it seems overly complicated. By the time I'd finished it, it was apparent that it doesn't actually *matter* what crap is in the hash tables; we can just be robust enough to cope.

So go back to a simple 16-bit hash offset in the data structures (but keep them allocated at setup time instead of on the stack).

When we first start walking a hash chain, it's simple enough to check that the first hofs we get from the hash_table is valid:

 - if it's later than the current position, it's obviously invalid.
 - if the hash value at hofs doesn't match, it's obviously invalid.
 - conversely, if the hash value *does* match and it's in the part of the packet that we have already processed, then we know it's valid, because we will have *put* it there when we processed that offset in the current packet.

So just do that validity check on hash_table[hash] when we first start looking, and reset it to INVALID_OFS if appropriate.

This adds a little overhead, but it should still be cheaper than doing the full memset() each time, and simpler than the previous version, with more consistent performance.

Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
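The validity check described above can be sketched roughly as follows. This is a minimal illustration, not the actual patch: the names `hash_table`, `hofs` and `INVALID_OFS` come from the commit message, but the hash function, table size and function signature here are hypothetical stand-ins.

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

#define INVALID_OFS 0xFFFF		/* sentinel: no previous occurrence */
#define HASH_BITS 12			/* hypothetical table size */
#define HASH_TABLE_SIZE (1 << HASH_BITS)

/* Hypothetical hash of the two bytes at offset 'ofs' in the packet. */
static uint16_t hash_two(const uint8_t *data, uint16_t ofs)
{
	return (uint16_t)(((data[ofs] << 4) ^ data[ofs + 1]) &
			  (HASH_TABLE_SIZE - 1));
}

/*
 * Fetch the head of the hash chain for 'hash', validating it first.
 * Entries left over from a previous packet may be stale:
 *  - an offset at or beyond the current position cannot be valid;
 *  - an offset whose bytes do not hash to 'hash' cannot be valid;
 *  - otherwise it points into the already-processed part of *this*
 *    packet, so we must have written it ourselves and it is valid.
 * Stale entries are reset to INVALID_OFS so the table self-cleans
 * without a full memset() per packet.
 */
static uint16_t first_hofs(uint16_t *hash_table, const uint8_t *data,
			   uint16_t pos, uint16_t hash)
{
	uint16_t hofs = hash_table[hash];

	if (hofs != INVALID_OFS &&
	    (hofs >= pos || hash_two(data, hofs) != hash)) {
		hash_table[hash] = INVALID_OFS;
		hofs = INVALID_OFS;
	}
	return hofs;
}
```

Only the first lookup in each chain needs this check; once a slot has been written during the current packet, later reads of it are valid by construction.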