## Summary
Optimizes EPUB metadata indexing for large books (2000+ chapters) from
~30 minutes to ~50 seconds by replacing O(n²) algorithms with O(n log n)
hash-indexed lookups.
Fixes #134
## Problem
Three phases had O(n²) complexity due to nested loops:
| Phase | Operation | Before (2768 chapters) |
|-------|-----------|------------------------|
| OPF Pass | For each spine ref, scan all manifest items | ~25 min |
| TOC Pass | For each TOC entry, scan all spine items | ~5 min |
| buildBookBin | For each spine item, scan ZIP central directory | ~8.4 min |
Total: **~30+ minutes** for first-time indexing of large EPUBs.
## Solution
Replace linear scans with sorted hash indexes + binary search:
- **OPF Pass**: Build `{hash(id), len, offset}` index from manifest,
binary search for each spine ref
- **TOC Pass**: Build `{hash(href), len, spineIndex}` index from spine,
binary search for each TOC entry
- **buildBookBin**: New `ZipFile::fillUncompressedSizes()` API - single
ZIP central directory scan with batch hash matching
All indexes use FNV-1a hashing with length as secondary key to minimize
collisions. Indexes are freed immediately after each phase.
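For illustration, a minimal sketch of the index entry and hash (field names and widths are assumptions, not the shipped code):

```cpp
// Sketch only: struct layout and names are assumptions, not the shipped code.
#include <cstddef>
#include <cstdint>

// 12 bytes packed (up to 16 with default alignment); see "Why Hash + Length?".
struct IndexEntry {
  uint64_t hash;   // 64-bit FNV-1a of the key (manifest id or spine href)
  uint16_t len;    // key length, used as the secondary key
  uint32_t value;  // payload: file offset or spine index, depending on phase
};

// Standard 64-bit FNV-1a over the raw key bytes.
static uint64_t fnv1a64(const char* s, size_t n) {
  uint64_t h = 14695981039346656037ULL;  // FNV offset basis
  for (size_t i = 0; i < n; ++i) {
    h ^= static_cast<uint8_t>(s[i]);
    h *= 1099511628211ULL;               // FNV prime
  }
  return h;
}

// Ordering for sorting and binary search: hash first, then length.
static bool lessThan(const IndexEntry& a, const IndexEntry& b) {
  return a.hash != b.hash ? a.hash < b.hash : a.len < b.len;
}
```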
## Results
**Shadow Slave EPUB (2768 chapters):**
| Phase | Before | After | Speedup |
|-------|--------|-------|---------|
| OPF pass | ~25 min | 10.8 sec | ~140x |
| TOC pass | ~5 min | 4.7 sec | ~60x |
| buildBookBin | 506 sec | 34.6 sec | ~15x |
| **Total** | **~30+ min** | **~50 sec** | **~36x** |
**Normal EPUB (87 chapters):** 1.7 sec - no regression.
## Memory
Peak temporary memory during indexing:
- OPF index: ~33KB (2770 items × 12 bytes)
- TOC index: ~33KB (2768 items × 12 bytes)
- ZIP batch: ~44KB (targets + sizes arrays)
All indexes cleared immediately after each phase. No OOM risk on
ESP32-C3.
## Note on Threshold
All optimizations are gated by `LARGE_SPINE_THRESHOLD = 400` to preserve
existing behavior for small books. However, the algorithms work
correctly for any book size and are faster even for small books:
| Book Size | Old O(n²) | New O(n log n) | Improvement |
|-----------|-----------|----------------|-------------|
| 10 ch | 100 ops | 50 ops | 2x |
| 100 ch | 10K ops | 800 ops | 12x |
| 400 ch | 160K ops | 4K ops | 40x |
If preferred, the threshold could be removed to use the optimized path
universally.
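For reference, the gate is a single size check along these lines (the constant comes from this PR; the surrounding function is a hypothetical sketch):

```cpp
#include <cstddef>

static constexpr size_t LARGE_SPINE_THRESHOLD = 400;

static bool useIndexedLookups(size_t spineCount) {
  // Small books keep the pre-existing linear-scan path untouched; large
  // books take the sorted-index + binary-search path described above.
  return spineCount >= LARGE_SPINE_THRESHOLD;
}
```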
## Testing
- [x] Shadow Slave (2768 chapters): 50s first-time indexing, loads and
navigates correctly
- [x] Normal book (87 chapters): 1.7s indexing, no regression
- [x] Build passes
- [x] clang-format passes
## Files Changed
- `lib/Epub/Epub/parsers/ContentOpfParser.h/.cpp` - OPF manifest index
- `lib/Epub/Epub/BookMetadataCache.h/.cpp` - TOC index + batch size
lookup
- `lib/ZipFile/ZipFile.h/.cpp` - New `fillUncompressedSizes()` API
- `lib/Epub/Epub.cpp` - Timing logs
<details>
<summary><b>Algorithm Details</b> (click to expand)</summary>
### Phase 1: OPF Pass - Manifest to Spine Lookup
**Problem**: Each `<itemref idref="ch001">` in spine must find matching
`<item id="ch001" href="...">` in manifest.
```
OLD: For each of 2768 spine refs, scan all 2770 manifest items
= 7.6M string comparisons
NEW: While parsing manifest, build index:
{ hash("ch001"), len=5, file_offset=120 }
Sort index, then binary search for each spine ref:
2768 × log₂(2770) ≈ 2768 × 11 = 30K comparisons
```
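A hedged C++ sketch of this path, reusing the `IndexEntry`, `fnv1a64`, and `lessThan` helpers sketched under Solution (function names here are illustrative, not the actual `ContentOpfParser` API):

```cpp
#include <algorithm>
#include <vector>

static std::vector<IndexEntry> manifestIndex;

// Streaming manifest parse: record each <item id="..."> as it streams by.
void onManifestItem(const char* id, size_t idLen, uint32_t fileOffset) {
  manifestIndex.push_back(
      {fnv1a64(id, idLen), static_cast<uint16_t>(idLen), fileOffset});
}

// Once the manifest closes: a single O(n log n) sort.
void onManifestEnd() {
  std::sort(manifestIndex.begin(), manifestIndex.end(), lessThan);
}

// Per spine <itemref idref="...">: O(log n) binary search instead of O(n) scan.
bool findManifestOffset(const char* idref, size_t n, uint32_t& offsetOut) {
  const IndexEntry key{fnv1a64(idref, n), static_cast<uint16_t>(n), 0};
  auto it = std::lower_bound(manifestIndex.begin(), manifestIndex.end(), key,
                             lessThan);
  if (it == manifestIndex.end() || it->hash != key.hash || it->len != key.len) {
    return false;
  }
  offsetOut = it->value;  // seek here to re-read the matching <item>'s href
  return true;
}
```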
### Phase 2: TOC Pass - TOC Entry to Spine Index Lookup
**Problem**: Each TOC entry with `href="chapter0001.xhtml"` must find
its spine index.
```
OLD: For each of 2768 TOC entries, scan all 2768 spine entries
= 7.6M string comparisons
NEW: At beginTocPass(), read spine once and build index:
{ hash("OEBPS/chapter0001.xhtml"), len=25, spineIndex=0 }
Sort index, binary search for each TOC entry:
2768 × log₂(2768) ≈ 30K comparisons
Clear index at endTocPass() to free memory.
```
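In the same sketch style (`spineCount()` and `spineHrefAt()` are hypothetical stand-ins for streaming the temporary spine file; the build/sort/clear lifecycle is the part taken from this PR):

```cpp
#include <string>

uint32_t spineCount();              // hypothetical: number of spine entries
std::string spineHrefAt(uint32_t);  // hypothetical: canonical href of entry i

static std::vector<IndexEntry> tocIndex;

void beginTocPass() {
  // Read the spine once, indexing each href by (hash, len) -> spine index.
  for (uint32_t i = 0; i < spineCount(); ++i) {
    const std::string href = spineHrefAt(i);  // e.g. "OEBPS/chapter0001.xhtml"
    tocIndex.push_back({fnv1a64(href.data(), href.size()),
                        static_cast<uint16_t>(href.size()), i});
  }
  std::sort(tocIndex.begin(), tocIndex.end(), lessThan);
}

void endTocPass() {
  // swap() with an empty vector releases the ~33KB now; clear() would not.
  std::vector<IndexEntry>().swap(tocIndex);
}
```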
### Phase 3: buildBookBin - ZIP Size Lookup
**Problem**: Need uncompressed file size for each spine item (for
reading progress). Sizes are in ZIP central directory.
```
OLD: For each of 2768 spine items, scan ZIP central directory (2773 entries)
= 7.6M filename reads + string comparisons
Time: 506 seconds
NEW:
Step 1: Build targets from spine
{ hash("OEBPS/chapter0001.xhtml"), len=25, index=0 }
Sort by (hash, len)
Step 2: Single pass through ZIP central directory
For each entry:
- Compute hash ON THE FLY (no string allocation)
- Binary search targets
- If match: sizes[target.index] = uncompressedSize
Step 3: Use sizes array directly (O(1) per spine item)
Total: 2773 entries × log₂(2768) ≈ 33K comparisons
Time: 35 seconds
```
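Caller-side, this might look roughly like the following (the exact `ZipFile::fillUncompressedSizes()` signature is an assumption; the batch, sorted-targets design is what the PR adds):

```cpp
// Reuses the helpers from the earlier sketches.
void fillSpineSizes(ZipFile& zip, std::vector<uint32_t>& sizes) {
  // Step 1: build targets from the spine, sorted by (hash, len).
  std::vector<IndexEntry> targets;
  for (uint32_t i = 0; i < spineCount(); ++i) {
    const std::string href = spineHrefAt(i);
    targets.push_back({fnv1a64(href.data(), href.size()),
                       static_cast<uint16_t>(href.size()), i});
  }
  std::sort(targets.begin(), targets.end(), lessThan);

  // Step 2: one pass over the ZIP central directory. ZipFile hashes each
  // stored filename on the fly, binary searches targets, and on a match
  // writes sizes[target.value] = uncompressedSize.
  sizes.assign(targets.size(), 0);
  zip.fillUncompressedSizes(targets.data(), targets.size(), sizes.data());

  // Step 3: sizes[i] is now the uncompressed size of spine item i, O(1) each.
}
```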
### Why Hash + Length?
Using 64-bit FNV-1a hash + string length as a composite key:
- Collision probability: negligible; a false match needs two distinct keys with the same 64-bit hash and the same length
- No string storage needed in index (just 12-16 bytes per entry)
- Integer comparisons are faster than string comparisons
- Verification on match handles the rare collision case
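For scale, a back-of-envelope birthday bound for a book this size:

```
P(any two of ~2770 keys share a 64-bit hash) ≈ n² / 2⁶⁵
                                             = 2770² / 2⁶⁵
                                             ≈ 2 × 10⁻¹³
```

Even that residual case is caught by the string verification on match.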
</details>
---
_AI-assisted development. All changes tested on hardware._