Xteink-X4-crosspoint-reader/lib/Epub/Epub/BookMetadataCache.h
Daniel Chelling 83315b6179
perf: optimize large EPUB indexing from O(n^2) to O(n) (#458)
## Summary

Optimizes EPUB metadata indexing for large books (2000+ chapters) from
~30 minutes to ~50 seconds by replacing O(n²) algorithms with O(n log n)
hash-indexed lookups.

Fixes #134

## Problem

Three phases had O(n²) complexity due to nested loops:

| Phase | Operation | Before (2768 chapters) |
|-------|-----------|------------------------|
| OPF Pass | For each spine ref, scan all manifest items | ~25 min |
| TOC Pass | For each TOC entry, scan all spine items | ~5 min |
| buildBookBin | For each spine item, scan ZIP central directory | ~8.4 min |

Total: **~30+ minutes** for first-time indexing of large EPUBs.

## Solution

Replace linear scans with sorted hash indexes + binary search:

- **OPF Pass**: Build `{hash(id), len, offset}` index from manifest,
binary search for each spine ref
- **TOC Pass**: Build `{hash(href), len, spineIndex}` index from spine,
binary search for each TOC entry
- **buildBookBin**: New `ZipFile::fillUncompressedSizes()` API - single
ZIP central directory scan with batch hash matching

All indexes use 64-bit FNV-1a hashing with the string length as a secondary
key to minimize collisions. Indexes are freed immediately after each phase.
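
The entry layout and hash below are taken verbatim from `BookMetadataCache.h` (shown at the bottom of this page); the comparator is only a sketch of how a (hash, length) ordering might look, not the literal implementation:

```cpp
#include <cstdint>
#include <string>

// Index entry and hash as declared in BookMetadataCache.h
struct SpineHrefIndexEntry {
  uint64_t hrefHash;  // FNV-1a 64-bit hash
  uint16_t hrefLen;   // length for collision reduction
  int16_t spineIndex;
};

static uint64_t fnvHash64(const std::string& s) {
  uint64_t hash = 14695981039346656037ull;
  for (char c : s) {
    hash ^= static_cast<uint8_t>(c);
    hash *= 1099511628211ull;
  }
  return hash;
}

// Sketch of the (hash, length) ordering used for sorting and binary search
static bool lessByHashLen(const SpineHrefIndexEntry& a, const SpineHrefIndexEntry& b) {
  return a.hrefHash != b.hrefHash ? a.hrefHash < b.hrefHash : a.hrefLen < b.hrefLen;
}
```

Each entry is 12 bytes (8 + 2 + 2), which is where the ~33KB per-phase figures in the Memory section come from.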

## Results

**Shadow Slave EPUB (2768 chapters):**

| Phase | Before | After | Speedup |
|-------|--------|-------|---------|
| OPF pass | ~25 min | 10.8 sec | ~140x |
| TOC pass | ~5 min | 4.7 sec | ~60x |
| buildBookBin | 506 sec | 34.6 sec | ~15x |
| **Total** | **~30+ min** | **~50 sec** | **~36x** |

**Normal EPUB (87 chapters):** 1.7 sec - no regression.

## Memory

Peak temporary memory during indexing:
- OPF index: ~33KB (2770 items × 12 bytes)
- TOC index: ~33KB (2768 items × 12 bytes)
- ZIP batch: ~44KB (targets + sizes arrays)

All indexes cleared immediately after each phase. No OOM risk on
ESP32-C3.

## Note on Threshold

All optimizations are gated by `LARGE_SPINE_THRESHOLD = 400` to preserve
existing behavior for small books. However, the algorithms work
correctly for any book size and are faster even for small books:

| Book Size | Old O(n²) | New O(n log n) | Improvement |
|-----------|-----------|----------------|-------------|
| 10 ch | 100 ops | 50 ops | 2x |
| 100 ch | 10K ops | 800 ops | 12x |
| 400 ch | 160K ops | 4K ops | 40x |

If preferred, the threshold could be removed to use the optimized path
universally.
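
For reference, the gate itself is a single comparison against the constant declared in `BookMetadataCache.h`; exactly where it is evaluated lives in the .cpp, so treat the placement as an assumption:

```cpp
// Inside the pass setup (assumed placement), roughly:
useSpineHrefIndex = (spineCount >= LARGE_SPINE_THRESHOLD);  // threshold = 400, see header below
// When false, the original linear-scan code path runs unchanged.
```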

## Testing

- [x] Shadow Slave (2768 chapters): 50s first-time indexing, loads and
navigates correctly
- [x] Normal book (87 chapters): 1.7s indexing, no regression
- [x] Build passes
- [x] clang-format passes

## Files Changed

- `lib/Epub/Epub/parsers/ContentOpfParser.h/.cpp` - OPF manifest index
- `lib/Epub/Epub/BookMetadataCache.h/.cpp` - TOC index + batch size
lookup
- `lib/ZipFile/ZipFile.h/.cpp` - New `fillUncompressedSizes()` API
- `lib/Epub/Epub.cpp` - Timing logs

<details>
<summary><b>Algorithm Details</b> (click to expand)</summary>

### Phase 1: OPF Pass - Manifest to Spine Lookup

**Problem**: Each `<itemref idref="ch001">` in spine must find matching
`<item id="ch001" href="...">` in manifest.

```
OLD: For each of 2768 spine refs, scan all 2770 manifest items
     = ~7.7M string comparisons

NEW: While parsing manifest, build index:
     { hash("ch001"), len=5, file_offset=120 }
     
     Sort index, then binary search for each spine ref:
     2768 × log₂(2770) ≈ 2768 × 11 = 30K comparisons
```
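
A rough C++ sketch of the new path. The struct and helper names here are hypothetical (the real code is in `ContentOpfParser`), and `fnvHash64` plus the includes come from the sketch in the Solution section above; only the `{hash(id), len, offset}` layout is from this PR:

```cpp
// Hypothetical names; only the {hash(id), len, offset} layout comes from the PR.
struct ManifestIdIndexEntry {
  uint64_t idHash;      // fnvHash64 of the <item id="...">
  uint16_t idLen;       // id length as a secondary key
  uint32_t fileOffset;  // byte offset of the <item> inside content.opf
};

static bool lessById(const ManifestIdIndexEntry& a, const ManifestIdIndexEntry& b) {
  return a.idHash != b.idHash ? a.idHash < b.idHash : a.idLen < b.idLen;
}

static std::vector<ManifestIdIndexEntry> manifestIndex;  // appended to while streaming <manifest>

// Called once when </manifest> is reached: one O(n log n) sort.
static void finalizeManifestIndex() {
  std::sort(manifestIndex.begin(), manifestIndex.end(), lessById);
}

// Called per <itemref idref="...">: one O(log n) binary search instead of a full manifest scan.
static const ManifestIdIndexEntry* findManifestItem(const std::string& idref) {
  const ManifestIdIndexEntry key{fnvHash64(idref), static_cast<uint16_t>(idref.size()), 0};
  const auto it = std::lower_bound(manifestIndex.begin(), manifestIndex.end(), key, lessById);
  return (it != manifestIndex.end() && it->idHash == key.idHash && it->idLen == key.idLen) ? &*it : nullptr;
}
```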

### Phase 2: TOC Pass - TOC Entry to Spine Index Lookup

**Problem**: Each TOC entry with `href="chapter0001.xhtml"` must find
its spine index.

```
OLD: For each of 2768 TOC entries, scan all 2768 spine entries
     = ~7.7M string comparisons

NEW: At beginTocPass(), read spine once and build index:
     { hash("OEBPS/chapter0001.xhtml"), len=25, spineIndex=0 }
     
     Sort index, binary search for each TOC entry:
     2768 × log₂(2768) ≈ 30K comparisons
     
     Clear index at endTocPass() to free memory.
```
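
The members this phase needs (`spineHrefIndex`, `useSpineHrefIndex`, `spineFile`, `readSpineEntry`, `fnvHash64`, `LARGE_SPINE_THRESHOLD`) are all declared in the header at the bottom of this page; the method bodies below are assumptions, since the .cpp is not shown here:

```cpp
// Assumed, simplified bodies using only members declared in BookMetadataCache.h.
// Other bookkeeping done by the real implementation is omitted.
bool BookMetadataCache::beginTocPass() {
  useSpineHrefIndex = (spineCount >= LARGE_SPINE_THRESHOLD);
  if (useSpineHrefIndex) {
    spineHrefIndex.reserve(spineCount);
    for (uint16_t i = 0; i < spineCount; ++i) {
      // One sequential read of the spine temp file, hashing each href as it streams by.
      const SpineEntry entry = readSpineEntry(spineFile);
      spineHrefIndex.push_back(
          {fnvHash64(entry.href), static_cast<uint16_t>(entry.href.size()), static_cast<int16_t>(i)});
    }
    std::sort(spineHrefIndex.begin(), spineHrefIndex.end(),
              [](const SpineHrefIndexEntry& a, const SpineHrefIndexEntry& b) {
                return a.hrefHash != b.hrefHash ? a.hrefHash < b.hrefHash : a.hrefLen < b.hrefLen;
              });
  }
  return true;
}

bool BookMetadataCache::endTocPass() {
  // Release the ~33KB index as soon as the pass is over.
  spineHrefIndex.clear();
  spineHrefIndex.shrink_to_fit();
  useSpineHrefIndex = false;
  return true;
}
```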

### Phase 3: buildBookBin - ZIP Size Lookup

**Problem**: Need uncompressed file size for each spine item (for
reading progress). Sizes are in ZIP central directory.

```
OLD: For each of 2768 spine items, scan ZIP central directory (2773 entries)
     = ~7.7M filename reads + string comparisons
     Time: 506 seconds

NEW: 
  Step 1: Build targets from spine
          { hash("OEBPS/chapter0001.xhtml"), len=25, index=0 }
          Sort by (hash, len)
  
  Step 2: Single pass through ZIP central directory
          For each entry:
            - Compute hash ON THE FLY (no string allocation)
            - Binary search targets
            - If match: sizes[target.index] = uncompressedSize
  
  Step 3: Use sizes array directly (O(1) per spine item)
  
  Total: 2773 entries × log₂(2768) ≈ 33K comparisons
  Time: 35 seconds
```
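
A sketch of the caller side in `buildBookBin()`. The PR only names `ZipFile::fillUncompressedSizes()`; the `SizeTarget` struct, the parameter shapes, and the `fillSpineSizes()` wrapper below are assumptions used purely for illustration:

```cpp
// Assumed target layout and call signature; only the API name comes from the PR.
struct SizeTarget {
  uint64_t pathHash;  // fnvHash64 of the full path inside the archive
  uint16_t pathLen;   // secondary key
  uint16_t index;     // which spine slot this size belongs to
};

void fillSpineSizes(ZipFile& zip, std::vector<SizeTarget>& targets, std::vector<uint32_t>& sizes) {
  // Step 1: sort targets by (hash, len) so the ZIP pass can binary-search them.
  std::sort(targets.begin(), targets.end(), [](const SizeTarget& a, const SizeTarget& b) {
    return a.pathHash != b.pathHash ? a.pathHash < b.pathHash : a.pathLen < b.pathLen;
  });

  // Step 2: one pass over the central directory; the ZIP code hashes each stored
  // filename on the fly, binary-searches targets, and writes the uncompressed
  // size into sizes[target.index] on a match.
  sizes.assign(targets.size(), 0);
  zip.fillUncompressedSizes(targets, sizes);  // assumed signature

  // Step 3: buildBookBin() then reads sizes[] in O(1) per spine item.
}
```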

### Why Hash + Length?

Using a 64-bit FNV-1a hash + string length as a composite key:
- Collision probability: roughly 1 in 2⁶⁴ for any two distinct paths, reduced further because the lengths must also match
- No string storage needed in index (just 12-16 bytes per entry)
- Integer comparisons are faster than string comparisons
- Verification on match handles the rare collision case, as sketched below
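
A hedged sketch of how that verification could look. `fetchSpineHref()` is a hypothetical helper standing in for however the real code recovers the original string (e.g. re-reading it from the spine temp file), and `lessByHashLen` comes from the sketch in the Solution section:

```cpp
// Illustrative only: walk all entries sharing the same (hash, len) and compare
// the real strings, so a 64-bit collision can never map an entry to the wrong
// chapter. fetchSpineHref() is a hypothetical helper.
int16_t resolveSpineIndexVerified(const std::vector<SpineHrefIndexEntry>& index, const std::string& href) {
  const uint64_t h = fnvHash64(href);
  const uint16_t len = static_cast<uint16_t>(href.size());
  const SpineHrefIndexEntry key{h, len, 0};
  auto it = std::lower_bound(index.begin(), index.end(), key, lessByHashLen);
  for (; it != index.end() && it->hrefHash == h && it->hrefLen == len; ++it) {
    if (fetchSpineHref(it->spineIndex) == href) {
      return it->spineIndex;  // confirmed: hash, length, and the actual string all match
    }
  }
  return -1;  // no verified match
}
```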

</details>

---

_AI-assisted development. All changes tested on hardware._

#pragma once

#include <SDCardManager.h>
#include <algorithm>
#include <string>
#include <vector>

class BookMetadataCache {
public:
  struct BookMetadata {
    std::string title;
    std::string author;
    std::string language;
    std::string coverItemHref;
    std::string textReferenceHref;
  };

  struct SpineEntry {
    std::string href;
    size_t cumulativeSize;
    int16_t tocIndex;

    SpineEntry() : cumulativeSize(0), tocIndex(-1) {}
    SpineEntry(std::string href, const size_t cumulativeSize, const int16_t tocIndex)
        : href(std::move(href)), cumulativeSize(cumulativeSize), tocIndex(tocIndex) {}
  };

  struct TocEntry {
    std::string title;
    std::string href;
    std::string anchor;
    uint8_t level;
    int16_t spineIndex;

    TocEntry() : level(0), spineIndex(-1) {}
    TocEntry(std::string title, std::string href, std::string anchor, const uint8_t level, const int16_t spineIndex)
        : title(std::move(title)),
          href(std::move(href)),
          anchor(std::move(anchor)),
          level(level),
          spineIndex(spineIndex) {}
  };

private:
  std::string cachePath;
  size_t lutOffset;
  uint16_t spineCount;
  uint16_t tocCount;
  bool loaded;
  bool buildMode;
  FsFile bookFile;

  // Temp file handles during build
  FsFile spineFile;
  FsFile tocFile;

  // Index for fast href→spineIndex lookup (used only for large EPUBs)
  struct SpineHrefIndexEntry {
    uint64_t hrefHash;  // FNV-1a 64-bit hash
    uint16_t hrefLen;   // length for collision reduction
    int16_t spineIndex;
  };
  std::vector<SpineHrefIndexEntry> spineHrefIndex;
  bool useSpineHrefIndex = false;
  static constexpr uint16_t LARGE_SPINE_THRESHOLD = 400;

  // FNV-1a 64-bit hash function
  static uint64_t fnvHash64(const std::string& s) {
    uint64_t hash = 14695981039346656037ull;
    for (char c : s) {
      hash ^= static_cast<uint8_t>(c);
      hash *= 1099511628211ull;
    }
    return hash;
  }

  uint32_t writeSpineEntry(FsFile& file, const SpineEntry& entry) const;
  uint32_t writeTocEntry(FsFile& file, const TocEntry& entry) const;
  SpineEntry readSpineEntry(FsFile& file) const;
  TocEntry readTocEntry(FsFile& file) const;

public:
  BookMetadata coreMetadata;

  explicit BookMetadataCache(std::string cachePath)
      : cachePath(std::move(cachePath)), lutOffset(0), spineCount(0), tocCount(0), loaded(false), buildMode(false) {}
  ~BookMetadataCache() = default;

  // Building phase (stream to disk immediately)
  bool beginWrite();
  bool beginContentOpfPass();
  void createSpineEntry(const std::string& href);
  bool endContentOpfPass();
  bool beginTocPass();
  void createTocEntry(const std::string& title, const std::string& href, const std::string& anchor, uint8_t level);
  bool endTocPass();
  bool endWrite();
  bool cleanupTmpFiles() const;

  // Post-processing to update mappings and sizes
  bool buildBookBin(const std::string& epubPath, const BookMetadata& metadata);

  // Reading phase (read mode)
  bool load();
  SpineEntry getSpineEntry(int index);
  TocEntry getTocEntry(int index);
  int getSpineCount() const { return spineCount; }
  int getTocCount() const { return tocCount; }
  bool isLoaded() const { return loaded; }
};