Compare commits

...

9 Commits

Author SHA1 Message Date
Daniel Chelling
e352b82cb7
Merge 06ced8f2d1 into 3ce11f14ce 2026-01-21 16:39:43 +01:00
Dave Allie
3ce11f14ce
chore: Cut release 0.15.0
2026-01-22 02:20:22 +11:00
KasyanDiGris
47ef92e8fd
fix: OPDS browser OOM (#403)
## Summary

- Rewrite OpdsParser to parse the feed as a stream instead of buffering the full content
- Fix OOM caused by large HTTP XML responses

Closes #385 

---

### AI Usage

While CrossPoint doesn't restrict the use of AI tools in contributions, please be transparent about their usage, as it helps set the right context for reviewers.

Did you use AI tools to help write this code? _**NO**_
2026-01-22 01:43:51 +11:00
Juan Biondi
e3d6e32609
docs: Add detailed webserver documentation (#446)
## More detailed documentation

* **What is the goal of this PR?** 
Add more information about the exposed webserver.
* **What changes are included?**
Detailed documentation for the webserver endpoints
(`./docs/webserver-endpoints.md`)
Added a table of contents so it is easier to navigate directly to the
section you're interested in (to almost all `.md` files, or at least all
those relevant)

## Additional Context

Not sure if this would get accepted but I thought it might be useful for
those trying to create separate apps that would sync files to the
device. It was at least useful to me when trying to upload files using Python, as described
[here](https://github.com/crosspoint-reader/crosspoint-reader/discussions/434#discussioncomment-15545349)

---

### AI Usage

While CrossPoint doesn't restrict the use of AI tools in contributions, please be transparent about their usage, as it helps set the right context for reviewers.

Did you use AI tools to help write this code? _**PARTIALLY**_
2026-01-22 00:29:39 +11:00
Logan Garbarini
d399afb53d
feat: invalidate cache on web uploads and opds downloads and add Clear Cache action (#393)
## Summary

When uploading or downloading an updated ebook from SD/WebUI/OPDS with
the same filename, the `.crosspoint` cache is not cleared. This can lead
to issues with the Table of Contents and hangs when switching between
chapters.

I encountered this issue in two places:
- When I do further ePub cleaning in Calibre after loading an ePub and
finding that some of its formatting should be cleaned up. When I
reprocess the same book and want to place it back in the same location,
I need a way to invalidate the cache.
- When syncing RSS-feed-generated ePubs. I generate news ePubs with
filenames like `news-outlet.epub`, so every day when I fetch fresh news
the `.crosspoint` cache needs to be cleared to load the new file.

This change offers the following features:
- On web uploads, if the file already exists, the cache for that file is
cleared
- On OPDS downloads, if the file already exists, the cache for that file
is cleared
- There's now an action for `Clear Cache` in the Settings page which can
clear the cache for all books


Addresses
https://github.com/crosspoint-reader/crosspoint-reader/issues/281

---

### AI Usage

While CrossPoint doesn't restrict the use of AI tools in contributions, please be transparent about their usage, as it helps set the right context for reviewers.

Did you use AI tools to help write this code? PARTIALLY

---------

Co-authored-by: Dave Allie <dave@daveallie.com>
2026-01-22 00:06:07 +11:00
Daniel
06ced8f2d1 perf: optimize large EPUB indexing from O(n²) to O(n log n)
Three optimizations for EPUBs with many chapters (e.g. 2768 chapters):

1. OPF idref→href lookup: Build sorted hash index during manifest parsing,
   use binary search during spine resolution. Reduces ~4min to ~30-60s.

2. TOC href→spineIndex lookup: Build sorted hash index in beginTocPass(),
   use binary search in createTocEntry(). Reduces ~4min to ~30-60s.

3. ZIP central-dir cursor: Resume scanning from last position instead of
   restarting from beginning. Reduces ~8min to ~1-3min.

All optimizations only activate for large EPUBs (≥400 spine items).
Small books use unchanged code paths.

Memory impact: ~33KB + ~39KB temporary during indexing, freed after.
Expected total: ~17min → ~3-5min for Shadow Slave (2768 chapters).

Also adds phase timing logs for performance measurement.
2026-01-20 23:35:54 -08:00
Daniel
702a7e7a00 fix: remove OOM-causing hrefToSpineIndex map from TOC pass
The unordered_map with 2768 string keys (~100KB+) was causing OOM crashes
at beginTocPass() on ESP32-C3's limited ~380KB RAM.

Reverted createTocEntry() to use original O(n) spine file scan instead.
Kept the safe spineToTocIndex vector in buildBookBin() (only ~5.5KB).
2026-01-20 21:14:05 -08:00
Daniel
33c6651044 perf: optimize large EPUB indexing from O(n²) to O(n)
Replace O(n²) lookups with O(n) preprocessing:

1. createTocEntry(): Build href->spineIndex map once in beginTocPass()
   instead of scanning spine file for every TOC entry

2. buildBookBin(): Build spineIndex->tocIndex vector in single pass
   instead of scanning TOC file for every spine entry

For 2768-chapter EPUBs, this reduces:
- TOC pass: from ~7.6M file reads to ~5.5K reads
- buildBookBin: from ~7.6M file reads to ~5.5K reads

Memory impact: ~80KB for href map (acceptable trade-off for 10x+ speedup)
2026-01-20 21:14:05 -08:00
Daniel
5f34388143 fix: prevent OOM crash when loading large EPUBs (2000+ chapters)
Remove the call to loadAllFileStatSlims() which pre-loads all ZIP central
directory entries into memory. For EPUBs with 2000+ chapters (like webnovels),
this exhausts the ESP32-C3's ~380KB RAM and causes abort().

The existing loadFileStatSlim() function already handles individual lookups
by scanning the central directory per-file when the cache is empty. This is
O(n*m) instead of O(n), but prevents memory exhaustion.

Fixes #134
2026-01-20 12:34:51 -08:00
26 changed files with 1043 additions and 180 deletions

View File

@ -2,6 +2,27 @@
Welcome to the **CrossPoint** firmware. This guide outlines the hardware controls, navigation, and reading features of the device.
- [CrossPoint User Guide](#crosspoint-user-guide)
- [1. Hardware Overview](#1-hardware-overview)
- [Button Layout](#button-layout)
- [2. Power \& Startup](#2-power--startup)
- [Power On / Off](#power-on--off)
- [First Launch](#first-launch)
- [3. Screens](#3-screens)
- [3.1 Home Screen](#31-home-screen)
- [3.2 Book Selection](#32-book-selection)
- [3.3 Reading Mode](#33-reading-mode)
- [3.4 File Upload Screen](#34-file-upload-screen)
- [3.5 Settings](#35-settings)
- [3.6 Sleep Screen](#36-sleep-screen)
- [4. Reading Mode](#4-reading-mode)
- [Page Turning](#page-turning)
- [Chapter Navigation](#chapter-navigation)
- [System Navigation](#system-navigation)
- [5. Chapter Selection Screen](#5-chapter-selection-screen)
- [6. Current Limitations \& Roadmap](#6-current-limitations--roadmap)
## 1. Hardware Overview
The device utilises the standard buttons on the Xtink X4 (in the same layout as the manufacturer firmware, by default):

docs/troubleshooting.md Normal file
View File

@ -0,0 +1,57 @@
# Troubleshooting
This document shows the most common issues and possible solutions when using the device's features.
- [Troubleshooting](#troubleshooting)
- [Cannot See the Device on the Network](#cannot-see-the-device-on-the-network)
- [Connection Drops or Times Out](#connection-drops-or-times-out)
- [Upload Fails](#upload-fails)
- [Saved Password Not Working](#saved-password-not-working)
### Cannot See the Device on the Network
**Problem:** Browser shows "Cannot connect" or "Site can't be reached"
**Solutions:**
1. Verify both devices are on the **same WiFi network**
- Check your computer/phone WiFi settings
- Confirm the CrossPoint Reader shows "Connected" status
2. Double-check the IP address
- Make sure you typed it correctly
- Include `http://` at the beginning
3. Try disabling VPN if you're using one
4. Some networks have "client isolation" enabled - check with your network administrator
### Connection Drops or Times Out
**Problem:** WiFi connection is unstable
**Solutions:**
1. Move closer to the WiFi router
2. Check signal strength on the device (should be at least `||` or better)
3. Avoid interference from other devices
4. Try a different WiFi network if available
### Upload Fails
**Problem:** File upload doesn't complete or shows an error
**Solutions:**
1. Ensure the file is a valid `.epub` file
2. Check that the SD card has enough free space
3. Try uploading a smaller file first to test
4. Refresh the browser page and try again
### Saved Password Not Working
**Problem:** Device fails to connect with saved credentials
**Solutions:**
1. When connection fails, you'll be prompted to "Forget Network"
2. Select **Yes** to remove the saved password
3. Reconnect and enter the password again
4. Choose to save the new password

docs/webserver-endpoints.md Normal file
View File

@ -0,0 +1,331 @@
# Webserver Endpoints
This document describes all HTTP and WebSocket endpoints available on the CrossPoint Reader webserver.
- [Webserver Endpoints](#webserver-endpoints)
- [Overview](#overview)
- [HTTP Endpoints](#http-endpoints)
- [GET `/` - Home Page](#get----home-page)
- [GET `/files` - File Browser Page](#get-files---file-browser-page)
- [GET `/api/status` - Device Status](#get-apistatus---device-status)
- [GET `/api/files` - List Files](#get-apifiles---list-files)
- [POST `/upload` - Upload File](#post-upload---upload-file)
- [POST `/mkdir` - Create Folder](#post-mkdir---create-folder)
- [POST `/delete` - Delete File or Folder](#post-delete---delete-file-or-folder)
- [WebSocket Endpoint](#websocket-endpoint)
- [Port 81 - Fast Binary Upload](#port-81---fast-binary-upload)
- [Network Modes](#network-modes)
- [Station Mode (STA)](#station-mode-sta)
- [Access Point Mode (AP)](#access-point-mode-ap)
- [Notes](#notes)
## Overview
The CrossPoint Reader exposes a webserver for file management and device monitoring:
- **HTTP Server**: Port 80
- **WebSocket Server**: Port 81 (for fast binary uploads)
---
## HTTP Endpoints
### GET `/` - Home Page
Serves the home page HTML interface.
**Request:**
```bash
curl http://crosspoint.local/
```
**Response:** HTML page (200 OK)
---
### GET `/files` - File Browser Page
Serves the file browser HTML interface.
**Request:**
```bash
curl http://crosspoint.local/files
```
**Response:** HTML page (200 OK)
---
### GET `/api/status` - Device Status
Returns JSON with device status information.
**Request:**
```bash
curl http://crosspoint.local/api/status
```
**Response (200 OK):**
```json
{
"version": "1.0.0",
"ip": "192.168.1.100",
"mode": "STA",
"rssi": -45,
"freeHeap": 123456,
"uptime": 3600
}
```
| Field | Type | Description |
| ---------- | ------ | --------------------------------------------------------- |
| `version` | string | CrossPoint firmware version |
| `ip` | string | Device IP address |
| `mode` | string | `"STA"` (connected to WiFi) or `"AP"` (access point mode) |
| `rssi` | number | WiFi signal strength in dBm (0 in AP mode) |
| `freeHeap` | number | Free heap memory in bytes |
| `uptime` | number | Seconds since device boot |
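**Python example** (a minimal sketch; assumes the third-party `requests` package is installed and that `crosspoint.local` resolves — otherwise substitute the IP shown on the device screen):
```python
import requests

# Fetch device status and print a one-line summary of the documented fields.
status = requests.get("http://crosspoint.local/api/status", timeout=10).json()
print(f"Firmware {status['version']} ({status['mode']}), "
      f"free heap {status['freeHeap']} bytes, up {status['uptime']} s")
```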
---
### GET `/api/files` - List Files
Returns a JSON array of files and folders in the specified directory.
**Request:**
```bash
# List root directory
curl http://crosspoint.local/api/files
# List specific directory
curl "http://crosspoint.local/api/files?path=/Books"
```
**Query Parameters:**
| Parameter | Required | Default | Description |
| --------- | -------- | ------- | ---------------------- |
| `path` | No | `/` | Directory path to list |
**Response (200 OK):**
```json
[
{"name": "MyBook.epub", "size": 1234567, "isDirectory": false, "isEpub": true},
{"name": "Notes", "size": 0, "isDirectory": true, "isEpub": false},
{"name": "document.pdf", "size": 54321, "isDirectory": false, "isEpub": false}
]
```
| Field | Type | Description |
| ------------- | ------- | ---------------------------------------- |
| `name` | string | File or folder name |
| `size` | number | Size in bytes (0 for directories) |
| `isDirectory` | boolean | `true` if the item is a folder |
| `isEpub` | boolean | `true` if the file has `.epub` extension |
**Notes:**
- Hidden files (starting with `.`) are automatically filtered out
- System folders (`System Volume Information`, `XTCache`) are hidden
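**Python example** (a minimal sketch for listing a directory; assumes the third-party `requests` package, and `/Books` is just an example path):
```python
import requests

# List the /Books directory; hidden and system entries are already filtered by the device.
entries = requests.get("http://crosspoint.local/api/files",
                       params={"path": "/Books"}, timeout=10).json()
for entry in entries:
    kind = "dir " if entry["isDirectory"] else "epub" if entry["isEpub"] else "file"
    print(f"{kind}  {entry['name']}  ({entry['size']} bytes)")
```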
---
### POST `/upload` - Upload File
Uploads a file to the SD card via multipart form data.
**Request:**
```bash
# Upload to root directory
curl -X POST -F "file=@mybook.epub" http://crosspoint.local/upload
# Upload to specific directory
curl -X POST -F "file=@mybook.epub" "http://crosspoint.local/upload?path=/Books"
```
**Query Parameters:**
| Parameter | Required | Default | Description |
| --------- | -------- | ------- | ------------------------------- |
| `path` | No | `/` | Target directory for the upload |
**Response (200 OK):**
```
File uploaded successfully: mybook.epub
```
**Error Responses:**
| Status | Body | Cause |
| ------ | ----------------------------------------------- | --------------------------- |
| 400 | `Failed to create file on SD card` | Cannot create file |
| 400 | `Failed to write to SD card - disk may be full` | Write error during upload |
| 400 | `Failed to write final data to SD card` | Error flushing final buffer |
| 400 | `Upload aborted` | Client aborted the upload |
| 400 | `Unknown error during upload` | Unspecified error |
**Notes:**
- Existing files with the same name will be overwritten
- Uses a 4KB buffer for efficient SD card writes
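**Python example** (a minimal multipart upload sketch; assumes the third-party `requests` package, and the filename and target path are examples):
```python
import requests

# Upload an EPUB to /Books via multipart form data. Existing files with the same
# name are overwritten, and the device clears its cached metadata for that book.
with open("mybook.epub", "rb") as f:
    resp = requests.post("http://crosspoint.local/upload",
                         params={"path": "/Books"},
                         files={"file": ("mybook.epub", f, "application/epub+zip")},
                         timeout=300)
resp.raise_for_status()
print(resp.text)  # e.g. "File uploaded successfully: mybook.epub"
```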
---
### POST `/mkdir` - Create Folder
Creates a new folder on the SD card.
**Request:**
```bash
curl -X POST -d "name=NewFolder&path=/" http://crosspoint.local/mkdir
```
**Form Parameters:**
| Parameter | Required | Default | Description |
| --------- | -------- | ------- | ---------------------------- |
| `name` | Yes | - | Name of the folder to create |
| `path` | No | `/` | Parent directory path |
**Response (200 OK):**
```
Folder created: NewFolder
```
**Error Responses:**
| Status | Body | Cause |
| ------ | ----------------------------- | ----------------------------- |
| 400 | `Missing folder name` | `name` parameter not provided |
| 400 | `Folder name cannot be empty` | Empty folder name |
| 400 | `Folder already exists` | Folder with same name exists |
| 500 | `Failed to create folder` | SD card error |
---
### POST `/delete` - Delete File or Folder
Deletes a file or folder from the SD card.
**Request:**
```bash
# Delete a file
curl -X POST -d "path=/Books/mybook.epub&type=file" http://crosspoint.local/delete
# Delete an empty folder
curl -X POST -d "path=/OldFolder&type=folder" http://crosspoint.local/delete
```
**Form Parameters:**
| Parameter | Required | Default | Description |
| --------- | -------- | ------- | -------------------------------- |
| `path` | Yes | - | Path to the item to delete |
| `type` | No | `file` | Type of item: `file` or `folder` |
**Response (200 OK):**
```
Deleted successfully
```
**Error Responses:**
| Status | Body | Cause |
| ------ | --------------------------------------------- | ----------------------------- |
| 400 | `Missing path` | `path` parameter not provided |
| 400 | `Cannot delete root directory` | Attempted to delete `/` |
| 400 | `Folder is not empty. Delete contents first.` | Non-empty folder |
| 403 | `Cannot delete system files` | Hidden file (starts with `.`) |
| 403 | `Cannot delete protected items` | Protected system folder |
| 404 | `Item not found` | Path does not exist |
| 500 | `Failed to delete item` | SD card error |
**Protected Items:**
- Files/folders starting with `.`
- `System Volume Information`
- `XTCache`
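**Python example** (a minimal sketch covering the `/mkdir` and `/delete` endpoints above; assumes the third-party `requests` package, and the folder and file paths are examples):
```python
import requests

BASE = "http://crosspoint.local"

# Create a folder, then delete a file. Both endpoints take form-encoded parameters
# and return a 4xx/5xx status with a plain-text reason on failure
# (e.g. "Folder already exists"), so raise_for_status() surfaces errors.
requests.post(f"{BASE}/mkdir", data={"name": "Queue", "path": "/"},
              timeout=10).raise_for_status()
requests.post(f"{BASE}/delete",
              data={"path": "/Queue/old-issue.epub", "type": "file"},
              timeout=10).raise_for_status()
```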
---
## WebSocket Endpoint
### Port 81 - Fast Binary Upload
A WebSocket endpoint for high-speed binary file uploads. More efficient than HTTP multipart for large files.
**Connection:**
```
ws://crosspoint.local:81/
```
**Protocol:**
1. **Client** sends TEXT message: `START:<filename>:<size>:<path>`
2. **Server** responds with TEXT: `READY`
3. **Client** sends BINARY messages with file data chunks
4. **Server** sends TEXT progress updates: `PROGRESS:<received>:<total>`
5. **Server** sends TEXT when complete: `DONE` or `ERROR:<message>`
**Example Session:**
```
Client -> "START:mybook.epub:1234567:/Books"
Server -> "READY"
Client -> [binary chunk 1]
Client -> [binary chunk 2]
Server -> "PROGRESS:65536:1234567"
Client -> [binary chunk 3]
...
Server -> "PROGRESS:1234567:1234567"
Server -> "DONE"
```
**Error Messages:**
| Message | Cause |
| --------------------------------- | ---------------------------------- |
| `ERROR:Failed to create file` | Cannot create file on SD card |
| `ERROR:Invalid START format` | Malformed START message |
| `ERROR:No upload in progress` | Binary data received without START |
| `ERROR:Write failed - disk full?` | SD card write error |
**Example with `websocat`:**
```bash
# Interactive session
websocat ws://crosspoint.local:81
# Then type:
START:mybook.epub:1234567:/Books
# Wait for READY, then send binary data
```
**Notes:**
- Progress updates are sent every 64KB or at completion
- Disconnection during upload will delete the incomplete file
- Existing files with the same name will be overwritten
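**Python example** (a minimal client sketch of the upload protocol above; assumes the third-party `websocket-client` package, and the chunk size and paths are examples):
```python
import os
import websocket  # pip install websocket-client

def ws_upload(local_path, remote_dir="/Books", host="crosspoint.local"):
    """Upload a file over the port-81 WebSocket protocol described above."""
    size = os.path.getsize(local_path)
    name = os.path.basename(local_path)
    ws = websocket.create_connection(f"ws://{host}:81/")
    try:
        ws.send(f"START:{name}:{size}:{remote_dir}")
        if ws.recv() != "READY":
            raise RuntimeError("device did not acknowledge the upload")
        with open(local_path, "rb") as f:
            while chunk := f.read(4096):
                ws.send(chunk, opcode=websocket.ABNF.OPCODE_BINARY)
        # Drain PROGRESS updates until the device reports DONE or an error.
        while True:
            msg = ws.recv()
            if msg == "DONE":
                return
            if isinstance(msg, str) and msg.startswith("ERROR:"):
                raise RuntimeError(msg)
    finally:
        ws.close()

ws_upload("mybook.epub")
```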
---
## Network Modes
The device can operate in two network modes:
### Station Mode (STA)
- Device connects to an existing WiFi network
- IP address assigned by router/DHCP
- `mode` field in `/api/status` returns `"STA"`
- `rssi` field shows signal strength
### Access Point Mode (AP)
- Device creates its own WiFi hotspot
- Default IP is typically `192.168.4.1`
- `mode` field in `/api/status` returns `"AP"`
- `rssi` field returns `0`
---
## Notes
- These examples use `crosspoint.local`. If your network does not support mDNS or the address does not resolve, replace it with the specific **IP Address** displayed on your device screen (e.g., `http://192.168.1.102/`).
- All paths on the SD card start with `/`
- Trailing slashes are automatically stripped (except for root `/`)
- The webserver uses chunked transfer encoding for file listings

View File

@ -172,89 +172,7 @@ This is useful for organizing your ebooks by genre, author, or series.
## Command Line File Management
For power users, you can manage files directly from your terminal using `curl` while the device is in File Upload mode.
### Uploading a File
To upload a file to the root directory, use the following command:
```bash
curl -F "file=@book.epub" "http://crosspoint.local/upload?path=/"
```
* **`-F "file=@filename"`**: Points to the local file on your computer.
* **`path=/`**: The destination folder on the device SD card.
### Deleting a File
To delete a specific file, provide the full path on the SD card:
```bash
curl -F "path=/folder/file.epub" "http://crosspoint.local/delete"
```
### Advanced Flags
For more reliable transfers of large EPUB files, consider adding these flags:
* `-#`: Shows a simple progress bar.
* `--connect-timeout 30`: Limits how long curl waits to establish a connection (in seconds).
* `--max-time 300`: Sets a maximum duration for the entire transfer (5 minutes).
> [!NOTE]
> These examples use `crosspoint.local`. If your network does not support mDNS or the address does not resolve, replace it with the specific **IP Address** displayed on your device screen (e.g., `http://192.168.1.102/`).
---
## Troubleshooting
### Cannot See the Device on the Network
**Problem:** Browser shows "Cannot connect" or "Site can't be reached"
**Solutions:**
1. Verify both devices are on the **same WiFi network**
- Check your computer/phone WiFi settings
- Confirm the CrossPoint Reader shows "Connected" status
2. Double-check the IP address
- Make sure you typed it correctly
- Include `http://` at the beginning
3. Try disabling VPN if you're using one
4. Some networks have "client isolation" enabled - check with your network administrator
### Connection Drops or Times Out
**Problem:** WiFi connection is unstable
**Solutions:**
1. Move closer to the WiFi router
2. Check signal strength on the device (should be at least `||` or better)
3. Avoid interference from other devices
4. Try a different WiFi network if available
### Upload Fails
**Problem:** File upload doesn't complete or shows an error
**Solutions:**
1. Ensure the file is a valid `.epub` file
2. Check that the SD card has enough free space
3. Try uploading a smaller file first to test
4. Refresh the browser page and try again
### Saved Password Not Working
**Problem:** Device fails to connect with saved credentials
**Solutions:**
1. When connection fails, you'll be prompted to "Forget Network"
2. Select **Yes** to remove the saved password
3. Reconnect and enter the password again
4. Choose to save the new password
---
For power users, you can manage files directly from your terminal using `curl` while the device is in File Upload mode; detailed documentation can be found [here](./webserver-endpoints.md).
## Security Notes
@ -303,4 +221,5 @@ Your uploaded files will be immediately available in the file browser!
## Related Documentation
- [User Guide](../USER_GUIDE.md) - General device operation
- [Troubleshooting](./troubleshooting.md) - Common issues and solutions
- [README](../README.md) - Project overview and features

View File

@ -226,6 +226,8 @@ bool Epub::load(const bool buildIfMissing) {
Serial.printf("[%lu] [EBP] Cache not found, building spine/TOC cache\n", millis());
setupCacheDir();
const uint32_t indexingStart = millis();
// Begin building cache - stream entries to disk immediately
if (!bookMetadataCache->beginWrite()) {
Serial.printf("[%lu] [EBP] Could not begin writing cache\n", millis());
@ -233,6 +235,7 @@ bool Epub::load(const bool buildIfMissing) {
}
// OPF Pass
const uint32_t opfStart = millis();
BookMetadataCache::BookMetadata bookMetadata;
if (!bookMetadataCache->beginContentOpfPass()) {
Serial.printf("[%lu] [EBP] Could not begin writing content.opf pass\n", millis());
@ -246,8 +249,10 @@ bool Epub::load(const bool buildIfMissing) {
Serial.printf("[%lu] [EBP] Could not end writing content.opf pass\n", millis());
return false;
}
Serial.printf("[%lu] [EBP] OPF pass completed in %lu ms\n", millis(), millis() - opfStart);
// TOC Pass - try EPUB 3 nav first, fall back to NCX
const uint32_t tocStart = millis();
if (!bookMetadataCache->beginTocPass()) {
Serial.printf("[%lu] [EBP] Could not begin writing toc pass\n", millis());
return false;
@ -276,6 +281,7 @@ bool Epub::load(const bool buildIfMissing) {
Serial.printf("[%lu] [EBP] Could not end writing toc pass\n", millis());
return false;
}
Serial.printf("[%lu] [EBP] TOC pass completed in %lu ms\n", millis(), millis() - tocStart);
// Close the cache files
if (!bookMetadataCache->endWrite()) {
@ -284,10 +290,13 @@ bool Epub::load(const bool buildIfMissing) {
}
// Build final book.bin
const uint32_t buildStart = millis();
if (!bookMetadataCache->buildBookBin(filepath, bookMetadata)) {
Serial.printf("[%lu] [EBP] Could not update mappings and sizes\n", millis());
return false;
}
Serial.printf("[%lu] [EBP] buildBookBin completed in %lu ms\n", millis(), millis() - buildStart);
Serial.printf("[%lu] [EBP] Total indexing completed in %lu ms\n", millis(), millis() - indexingStart);
if (!bookMetadataCache->cleanupTmpFiles()) {
Serial.printf("[%lu] [EBP] Could not cleanup tmp files - ignoring\n", millis());

View File

@ -40,7 +40,6 @@ bool BookMetadataCache::endContentOpfPass() {
bool BookMetadataCache::beginTocPass() {
Serial.printf("[%lu] [BMC] Beginning toc pass\n", millis());
// Open spine file for reading
if (!SdMan.openFileForRead("BMC", cachePath + tmpSpineBinFile, spineFile)) {
return false;
}
@ -48,12 +47,41 @@ bool BookMetadataCache::beginTocPass() {
spineFile.close();
return false;
}
if (spineCount >= LARGE_SPINE_THRESHOLD) {
spineHrefIndex.clear();
spineHrefIndex.reserve(spineCount);
spineFile.seek(0);
for (int i = 0; i < spineCount; i++) {
auto entry = readSpineEntry(spineFile);
SpineHrefIndexEntry idx;
idx.hrefHash = fnvHash64(entry.href);
idx.hrefLen = static_cast<uint16_t>(entry.href.size());
idx.spineIndex = static_cast<int16_t>(i);
spineHrefIndex.push_back(idx);
}
std::sort(spineHrefIndex.begin(), spineHrefIndex.end(),
[](const SpineHrefIndexEntry& a, const SpineHrefIndexEntry& b) {
return a.hrefHash < b.hrefHash || (a.hrefHash == b.hrefHash && a.hrefLen < b.hrefLen);
});
spineFile.seek(0);
useSpineHrefIndex = true;
Serial.printf("[%lu] [BMC] Using fast index for %d spine items\n", millis(), spineCount);
} else {
useSpineHrefIndex = false;
}
return true;
}
bool BookMetadataCache::endTocPass() {
tocFile.close();
spineFile.close();
spineHrefIndex.clear();
spineHrefIndex.shrink_to_fit();
useSpineHrefIndex = false;
return true;
}
@ -124,6 +152,18 @@ bool BookMetadataCache::buildBookBin(const std::string& epubPath, const BookMeta
// LUTs complete
// Loop through spines from spine file matching up TOC indexes, calculating cumulative size and writing to book.bin
// Build spineIndex->tocIndex mapping in one pass (O(n) instead of O(n*m))
std::vector<int16_t> spineToTocIndex(spineCount, -1);
tocFile.seek(0);
for (int j = 0; j < tocCount; j++) {
auto tocEntry = readTocEntry(tocFile);
if (tocEntry.spineIndex >= 0 && tocEntry.spineIndex < spineCount) {
if (spineToTocIndex[tocEntry.spineIndex] == -1) {
spineToTocIndex[tocEntry.spineIndex] = static_cast<int16_t>(j);
}
}
}
ZipFile zip(epubPath);
// Pre-open zip file to speed up size calculations
if (!zip.open()) {
@ -133,31 +173,19 @@ bool BookMetadataCache::buildBookBin(const std::string& epubPath, const BookMeta
tocFile.close();
return false;
}
// TODO: For large ZIPs loading the all localHeaderOffsets will crash.
// However not having them loaded is extremely slow. Need a better solution here.
// Perhaps only a cache of spine items or a better way to speedup lookups?
if (!zip.loadAllFileStatSlims()) {
Serial.printf("[%lu] [BMC] Could not load zip local header offsets for size calculations\n", millis());
bookFile.close();
spineFile.close();
tocFile.close();
zip.close();
return false;
}
// NOTE: We intentionally skip calling loadAllFileStatSlims() here.
// For large EPUBs (2000+ chapters), pre-loading all ZIP central directory entries
// into memory causes OOM crashes on ESP32-C3's limited ~380KB RAM.
// Instead, we let loadFileStatSlim() do individual lookups per spine item.
// This is O(n*m) instead of O(n) for lookups, but avoids memory exhaustion.
// See: https://github.com/crosspoint-reader/crosspoint-reader/issues/134
uint32_t cumSize = 0;
spineFile.seek(0);
int lastSpineTocIndex = -1;
for (int i = 0; i < spineCount; i++) {
auto spineEntry = readSpineEntry(spineFile);
tocFile.seek(0);
for (int j = 0; j < tocCount; j++) {
auto tocEntry = readTocEntry(tocFile);
if (tocEntry.spineIndex == i) {
spineEntry.tocIndex = j;
break;
}
}
spineEntry.tocIndex = spineToTocIndex[i];
// Not a huge deal if we don't find a TOC entry for the spine entry, this is expected behaviour for EPUBs
// Logging here is for debugging
@ -248,21 +276,38 @@ void BookMetadataCache::createTocEntry(const std::string& title, const std::stri
return;
}
int spineIndex = -1;
// find spine index
// TODO: This lookup is slow as need to scan through all items each time. We can't hold it all in memory due to size.
// But perhaps we can load just the hrefs in a vector/list to do an index lookup?
spineFile.seek(0);
for (int i = 0; i < spineCount; i++) {
auto spineEntry = readSpineEntry(spineFile);
if (spineEntry.href == href) {
spineIndex = i;
int16_t spineIndex = -1;
if (useSpineHrefIndex) {
uint64_t targetHash = fnvHash64(href);
uint16_t targetLen = static_cast<uint16_t>(href.size());
auto it = std::lower_bound(spineHrefIndex.begin(), spineHrefIndex.end(),
SpineHrefIndexEntry{targetHash, targetLen, 0},
[](const SpineHrefIndexEntry& a, const SpineHrefIndexEntry& b) {
return a.hrefHash < b.hrefHash || (a.hrefHash == b.hrefHash && a.hrefLen < b.hrefLen);
});
while (it != spineHrefIndex.end() && it->hrefHash == targetHash && it->hrefLen == targetLen) {
spineIndex = it->spineIndex;
break;
}
}
if (spineIndex == -1) {
Serial.printf("[%lu] [BMC] addTocEntry: Could not find spine item for TOC href %s\n", millis(), href.c_str());
if (spineIndex == -1) {
Serial.printf("[%lu] [BMC] createTocEntry: Could not find spine item for TOC href %s\n", millis(), href.c_str());
}
} else {
spineFile.seek(0);
for (int i = 0; i < spineCount; i++) {
auto spineEntry = readSpineEntry(spineFile);
if (spineEntry.href == href) {
spineIndex = static_cast<int16_t>(i);
break;
}
}
if (spineIndex == -1) {
Serial.printf("[%lu] [BMC] createTocEntry: Could not find spine item for TOC href %s\n", millis(), href.c_str());
}
}
const TocEntry entry(title, href, anchor, level, spineIndex);

View File

@ -2,7 +2,9 @@
#include <SDCardManager.h>
#include <algorithm>
#include <string>
#include <vector>
class BookMetadataCache {
public:
@ -53,6 +55,27 @@ class BookMetadataCache {
FsFile spineFile;
FsFile tocFile;
// Index for fast href→spineIndex lookup (used only for large EPUBs)
struct SpineHrefIndexEntry {
uint64_t hrefHash; // FNV-1a 64-bit hash
uint16_t hrefLen; // length for collision reduction
int16_t spineIndex;
};
std::vector<SpineHrefIndexEntry> spineHrefIndex;
bool useSpineHrefIndex = false;
static constexpr uint16_t LARGE_SPINE_THRESHOLD = 400;
// FNV-1a 64-bit hash function
static uint64_t fnvHash64(const std::string& s) {
uint64_t hash = 14695981039346656037ull;
for (char c : s) {
hash ^= static_cast<uint8_t>(c);
hash *= 1099511628211ull;
}
return hash;
}
uint32_t writeSpineEntry(FsFile& file, const SpineEntry& entry) const;
uint32_t writeTocEntry(FsFile& file, const TocEntry& entry) const;
SpineEntry readSpineEntry(FsFile& file) const;

View File

@ -38,6 +38,9 @@ ContentOpfParser::~ContentOpfParser() {
if (SdMan.exists((cachePath + itemCacheFile).c_str())) {
SdMan.remove((cachePath + itemCacheFile).c_str());
}
itemIndex.clear();
itemIndex.shrink_to_fit();
useItemIndex = false;
}
size_t ContentOpfParser::write(const uint8_t data) { return write(&data, 1); }
@ -129,6 +132,16 @@ void XMLCALL ContentOpfParser::startElement(void* userData, const XML_Char* name
"[%lu] [COF] Couldn't open temp items file for reading. This is probably going to be a fatal error.\n",
millis());
}
// Sort item index for binary search if we have enough items
if (self->itemIndex.size() >= LARGE_SPINE_THRESHOLD) {
std::sort(self->itemIndex.begin(), self->itemIndex.end(),
[](const ItemIndexEntry& a, const ItemIndexEntry& b) {
return a.idHash < b.idHash || (a.idHash == b.idHash && a.idLen < b.idLen);
});
self->useItemIndex = true;
Serial.printf("[%lu] [COF] Using fast index for %zu manifest items\n", millis(), self->itemIndex.size());
}
return;
}
@ -180,6 +193,15 @@ void XMLCALL ContentOpfParser::startElement(void* userData, const XML_Char* name
}
}
// Record index entry for fast lookup later
if (self->tempItemStore) {
ItemIndexEntry entry;
entry.idHash = fnvHash(itemId);
entry.idLen = static_cast<uint16_t>(itemId.size());
entry.fileOffset = static_cast<uint32_t>(self->tempItemStore.position());
self->itemIndex.push_back(entry);
}
// Write items down to SD card
serialization::writeString(self->tempItemStore, itemId);
serialization::writeString(self->tempItemStore, href);
@ -215,19 +237,50 @@ void XMLCALL ContentOpfParser::startElement(void* userData, const XML_Char* name
for (int i = 0; atts[i]; i += 2) {
if (strcmp(atts[i], "idref") == 0) {
const std::string idref = atts[i + 1];
// Resolve the idref to href using items map
// TODO: This lookup is slow as need to scan through all items each time.
// It can take up to 200ms per item when getting to 1500 items.
self->tempItemStore.seek(0);
std::string itemId;
std::string href;
while (self->tempItemStore.available()) {
serialization::readString(self->tempItemStore, itemId);
serialization::readString(self->tempItemStore, href);
if (itemId == idref) {
self->cache->createSpineEntry(href);
break;
bool found = false;
if (self->useItemIndex) {
// Fast path: binary search
uint32_t targetHash = fnvHash(idref);
uint16_t targetLen = static_cast<uint16_t>(idref.size());
auto it = std::lower_bound(self->itemIndex.begin(), self->itemIndex.end(),
ItemIndexEntry{targetHash, targetLen, 0},
[](const ItemIndexEntry& a, const ItemIndexEntry& b) {
return a.idHash < b.idHash || (a.idHash == b.idHash && a.idLen < b.idLen);
});
// Check for match (may need to check a few due to hash collisions)
while (it != self->itemIndex.end() && it->idHash == targetHash) {
self->tempItemStore.seek(it->fileOffset);
std::string itemId;
serialization::readString(self->tempItemStore, itemId);
if (itemId == idref) {
serialization::readString(self->tempItemStore, href);
found = true;
break;
}
++it;
}
} else {
// Slow path: linear scan (for small manifests, keeps original behavior)
// TODO: This lookup is slow as need to scan through all items each time.
// It can take up to 200ms per item when getting to 1500 items.
self->tempItemStore.seek(0);
std::string itemId;
while (self->tempItemStore.available()) {
serialization::readString(self->tempItemStore, itemId);
serialization::readString(self->tempItemStore, href);
if (itemId == idref) {
found = true;
break;
}
}
}
if (found && self->cache) {
self->cache->createSpineEntry(href);
}
}
}

View File

@ -1,6 +1,9 @@
#pragma once
#include <Print.h>
#include <vector>
#include <algorithm>
#include "Epub.h"
#include "expat.h"
@ -28,6 +31,27 @@ class ContentOpfParser final : public Print {
FsFile tempItemStore;
std::string coverItemId;
// Index for fast idref→href lookup (used only for large EPUBs)
struct ItemIndexEntry {
uint32_t idHash; // FNV-1a hash of itemId
uint16_t idLen; // length for collision reduction
uint32_t fileOffset; // offset in .items.bin
};
std::vector<ItemIndexEntry> itemIndex;
bool useItemIndex = false;
static constexpr uint16_t LARGE_SPINE_THRESHOLD = 400;
// FNV-1a hash function
static uint32_t fnvHash(const std::string& s) {
uint32_t hash = 2166136261u;
for (char c : s) {
hash ^= static_cast<uint8_t>(c);
hash *= 16777619u;
}
return hash;
}
static void startElement(void* userData, const XML_Char* name, const XML_Char** atts);
static void characterData(void* userData, const XML_Char* s, int len);
static void endElement(void* userData, const XML_Char* name);

View File

@ -4,6 +4,14 @@
#include <cstring>
OpdsParser::OpdsParser() {
parser = XML_ParserCreate(nullptr);
if (!parser) {
errorOccured = true;
Serial.printf("[%lu] [OPDS] Couldn't allocate memory for parser\n", millis());
}
}
OpdsParser::~OpdsParser() {
if (parser) {
XML_StopParser(parser, XML_FALSE);
@ -14,13 +22,11 @@ OpdsParser::~OpdsParser() {
}
}
bool OpdsParser::parse(const char* xmlData, const size_t length) {
clear();
size_t OpdsParser::write(uint8_t c) { return write(&c, 1); }
parser = XML_ParserCreate(nullptr);
if (!parser) {
Serial.printf("[%lu] [OPDS] Couldn't allocate memory for parser\n", millis());
return false;
size_t OpdsParser::write(const uint8_t* xmlData, const size_t length) {
if (errorOccured) {
return length;
}
XML_SetUserData(parser, this);
@ -28,43 +34,48 @@ bool OpdsParser::parse(const char* xmlData, const size_t length) {
XML_SetCharacterDataHandler(parser, characterData);
// Parse in chunks to avoid large buffer allocations
const char* currentPos = xmlData;
const char* currentPos = reinterpret_cast<const char*>(xmlData);
size_t remaining = length;
constexpr size_t chunkSize = 1024;
while (remaining > 0) {
void* const buf = XML_GetBuffer(parser, chunkSize);
if (!buf) {
errorOccured = true;
Serial.printf("[%lu] [OPDS] Couldn't allocate memory for buffer\n", millis());
XML_ParserFree(parser);
parser = nullptr;
return false;
return length;
}
const size_t toRead = remaining < chunkSize ? remaining : chunkSize;
memcpy(buf, currentPos, toRead);
const bool isFinal = (remaining == toRead);
if (XML_ParseBuffer(parser, static_cast<int>(toRead), isFinal) == XML_STATUS_ERROR) {
if (XML_ParseBuffer(parser, static_cast<int>(toRead), 0) == XML_STATUS_ERROR) {
errorOccured = true;
Serial.printf("[%lu] [OPDS] Parse error at line %lu: %s\n", millis(), XML_GetCurrentLineNumber(parser),
XML_ErrorString(XML_GetErrorCode(parser)));
XML_ParserFree(parser);
parser = nullptr;
return false;
return length;
}
currentPos += toRead;
remaining -= toRead;
}
// Clean up parser
XML_ParserFree(parser);
parser = nullptr;
Serial.printf("[%lu] [OPDS] Parsed %zu entries\n", millis(), entries.size());
return true;
return length;
}
void OpdsParser::flush() {
if (XML_Parse(parser, nullptr, 0, XML_TRUE) != XML_STATUS_OK) {
errorOccured = true;
XML_ParserFree(parser);
parser = nullptr;
}
}
bool OpdsParser::error() const { return errorOccured; }
void OpdsParser::clear() {
entries.clear();
currentEntry = OpdsEntry{};

View File

@ -1,4 +1,5 @@
#pragma once
#include <Print.h>
#include <expat.h>
#include <string>
@ -42,28 +43,30 @@ using OpdsBook = OpdsEntry;
* }
* }
*/
class OpdsParser {
class OpdsParser final : public Print {
public:
OpdsParser() = default;
OpdsParser();
~OpdsParser();
// Disable copy
OpdsParser(const OpdsParser&) = delete;
OpdsParser& operator=(const OpdsParser&) = delete;
/**
* Parse an OPDS XML feed.
* @param xmlData Pointer to the XML data
* @param length Length of the XML data
* @return true if parsing succeeded, false on error
*/
bool parse(const char* xmlData, size_t length);
size_t write(uint8_t) override;
size_t write(const uint8_t*, size_t) override;
void flush() override;
bool error() const;
operator bool() { return !error(); }
/**
* Get the parsed entries (both navigation and book entries).
* @return Vector of OpdsEntry entries
*/
const std::vector<OpdsEntry>& getEntries() const { return entries; }
const std::vector<OpdsEntry>& getEntries() const& { return entries; }
std::vector<OpdsEntry> getEntries() && { return std::move(entries); }
/**
* Get only book entries (legacy compatibility).
@ -96,4 +99,6 @@ class OpdsParser {
bool inAuthor = false;
bool inAuthorName = false;
bool inId = false;
bool errorOccured = false;
};

View File

@ -0,0 +1,15 @@
#include "OpdsStream.h"
OpdsParserStream::OpdsParserStream(OpdsParser& parser) : parser(parser) {}
int OpdsParserStream::available() { return 0; }
int OpdsParserStream::peek() { abort(); }
int OpdsParserStream::read() { abort(); }
size_t OpdsParserStream::write(uint8_t c) { return parser.write(c); }
size_t OpdsParserStream::write(const uint8_t* buffer, size_t size) { return parser.write(buffer, size); }
OpdsParserStream::~OpdsParserStream() { parser.flush(); }

View File

@ -0,0 +1,23 @@
#pragma once
#include <Stream.h>
#include "OpdsParser.h"
class OpdsParserStream : public Stream {
public:
explicit OpdsParserStream(OpdsParser& parser);
// These functions are not implemented for this stream
int available() override;
int peek() override;
int read() override;
virtual size_t write(uint8_t c) override;
virtual size_t write(const uint8_t* buffer, size_t size) override;
~OpdsParserStream() override;
private:
OpdsParser& parser;
};

View File

@ -74,6 +74,10 @@ bool ZipFile::loadAllFileStatSlims() {
file.seekCur(m + k);
}
// Set cursor to start of central directory for sequential access
lastCentralDirPos = zipDetails.centralDirOffset;
lastCentralDirPosValid = true;
if (!wasOpen) {
close();
}
@ -102,15 +106,35 @@ bool ZipFile::loadFileStatSlim(const char* filename, FileStatSlim* fileStat) {
return false;
}
file.seek(zipDetails.centralDirOffset);
// Phase 1: Try scanning from cursor position first
uint32_t startPos = lastCentralDirPosValid ? lastCentralDirPos : zipDetails.centralDirOffset;
uint32_t wrapPos = zipDetails.centralDirOffset;
bool wrapped = false;
bool found = false;
file.seek(startPos);
uint32_t sig;
char itemName[256];
bool found = false;
while (file.available()) {
file.read(&sig, 4);
if (sig != 0x02014b50) break; // End of list
while (true) {
uint32_t entryStart = file.position();
if (file.read(&sig, 4) != 4 || sig != 0x02014b50) {
// End of central directory
if (!wrapped && lastCentralDirPosValid && startPos != zipDetails.centralDirOffset) {
// Wrap around to beginning
file.seek(zipDetails.centralDirOffset);
wrapped = true;
continue;
}
break;
}
// If we've wrapped and reached our start position, stop
if (wrapped && entryStart >= startPos) {
break;
}
file.seekCur(6);
file.read(&fileStat->method, 2);
@ -123,15 +147,25 @@ bool ZipFile::loadFileStatSlim(const char* filename, FileStatSlim* fileStat) {
file.read(&k, 2);
file.seekCur(8);
file.read(&fileStat->localHeaderOffset, 4);
file.read(itemName, nameLen);
itemName[nameLen] = '\0';
if (strcmp(itemName, filename) == 0) {
found = true;
break;
if (nameLen < 256) {
file.read(itemName, nameLen);
itemName[nameLen] = '\0';
if (strcmp(itemName, filename) == 0) {
// Found it! Update cursor to next entry
file.seekCur(m + k);
lastCentralDirPos = file.position();
lastCentralDirPosValid = true;
found = true;
break;
}
} else {
// Name too long, skip it
file.seekCur(nameLen);
}
// Skip the rest of this entry (extra field + comment)
// Skip extra field + comment
file.seekCur(m + k);
}
@ -253,6 +287,8 @@ bool ZipFile::close() {
if (file) {
file.close();
}
lastCentralDirPos = 0;
lastCentralDirPosValid = false;
return true;
}

View File

@ -25,6 +25,10 @@ class ZipFile {
ZipDetails zipDetails = {0, 0, false};
std::unordered_map<std::string, FileStatSlim> fileStatSlimCache;
// Cursor for sequential central-dir scanning optimization
uint32_t lastCentralDirPos = 0;
bool lastCentralDirPosValid = false;
bool loadFileStatSlim(const char* filename, FileStatSlim* fileStat);
long getDataOffset(const FileStatSlim& fileStat);
bool loadZipDetails();

View File

@ -2,7 +2,7 @@
default_envs = default
[crosspoint]
version = 0.14.0
version = 0.15.0
[base]
platform = espressif32 @ 6.12.0

View File

@ -1,7 +1,9 @@
#include "OpdsBookBrowserActivity.h"
#include <Epub.h>
#include <GfxRenderer.h>
#include <HardwareSerial.h>
#include <OpdsStream.h>
#include <WiFi.h>
#include "CrossPointSettings.h"
@ -264,23 +266,27 @@ void OpdsBookBrowserActivity::fetchFeed(const std::string& path) {
std::string url = UrlUtils::buildUrl(serverUrl, path);
Serial.printf("[%lu] [OPDS] Fetching: %s\n", millis(), url.c_str());
std::string content;
if (!HttpDownloader::fetchUrl(url, content)) {
state = BrowserState::ERROR;
errorMessage = "Failed to fetch feed";
updateRequired = true;
return;
OpdsParser parser;
{
OpdsParserStream stream{parser};
if (!HttpDownloader::fetchUrl(url, stream)) {
state = BrowserState::ERROR;
errorMessage = "Failed to fetch feed";
updateRequired = true;
return;
}
}
OpdsParser parser;
if (!parser.parse(content.c_str(), content.size())) {
if (!parser) {
state = BrowserState::ERROR;
errorMessage = "Failed to parse feed";
updateRequired = true;
return;
}
entries = parser.getEntries();
entries = std::move(parser).getEntries();
Serial.printf("[%lu] [OPDS] Found %d entries\n", millis(), entries.size());
selectorIndex = 0;
if (entries.empty()) {
@ -355,6 +361,12 @@ void OpdsBookBrowserActivity::downloadBook(const OpdsEntry& book) {
if (result == HttpDownloader::OK) {
Serial.printf("[%lu] [OPDS] Download complete: %s\n", millis(), filename.c_str());
// Invalidate any existing cache for this file to prevent stale metadata issues
Epub epub(filename, "/.crosspoint");
epub.clearCache();
Serial.printf("[%lu] [OPDS] Cleared cache for: %s\n", millis(), filename.c_str());
state = BrowserState::BROWSING;
updateRequired = true;
} else {

View File

@ -6,6 +6,7 @@
#include <cstring>
#include "CalibreSettingsActivity.h"
#include "ClearCacheActivity.h"
#include "CrossPointSettings.h"
#include "KOReaderSettingsActivity.h"
#include "MappedInputManager.h"
@ -110,6 +111,14 @@ void CategorySettingsActivity::toggleCurrentSetting() {
updateRequired = true;
}));
xSemaphoreGive(renderingMutex);
} else if (strcmp(setting.name, "Clear Cache") == 0) {
xSemaphoreTake(renderingMutex, portMAX_DELAY);
exitActivity();
enterNewActivity(new ClearCacheActivity(renderer, mappedInput, [this] {
exitActivity();
updateRequired = true;
}));
xSemaphoreGive(renderingMutex);
} else if (strcmp(setting.name, "Check for updates") == 0) {
xSemaphoreTake(renderingMutex, portMAX_DELAY);
exitActivity();

View File

@ -0,0 +1,178 @@
#include "ClearCacheActivity.h"
#include <GfxRenderer.h>
#include <HardwareSerial.h>
#include <SDCardManager.h>
#include "MappedInputManager.h"
#include "fontIds.h"
void ClearCacheActivity::taskTrampoline(void* param) {
auto* self = static_cast<ClearCacheActivity*>(param);
self->displayTaskLoop();
}
void ClearCacheActivity::onEnter() {
ActivityWithSubactivity::onEnter();
renderingMutex = xSemaphoreCreateMutex();
state = WARNING;
updateRequired = true;
xTaskCreate(&ClearCacheActivity::taskTrampoline, "ClearCacheActivityTask",
4096, // Stack size
this, // Parameters
1, // Priority
&displayTaskHandle // Task handle
);
}
void ClearCacheActivity::onExit() {
ActivityWithSubactivity::onExit();
// Wait until not rendering to delete task to avoid killing mid-instruction to EPD
xSemaphoreTake(renderingMutex, portMAX_DELAY);
if (displayTaskHandle) {
vTaskDelete(displayTaskHandle);
displayTaskHandle = nullptr;
}
vSemaphoreDelete(renderingMutex);
renderingMutex = nullptr;
}
void ClearCacheActivity::displayTaskLoop() {
while (true) {
if (updateRequired) {
updateRequired = false;
xSemaphoreTake(renderingMutex, portMAX_DELAY);
render();
xSemaphoreGive(renderingMutex);
}
vTaskDelay(10 / portTICK_PERIOD_MS);
}
}
void ClearCacheActivity::render() {
const auto pageHeight = renderer.getScreenHeight();
renderer.clearScreen();
renderer.drawCenteredText(UI_12_FONT_ID, 15, "Clear Cache", true, EpdFontFamily::BOLD);
if (state == WARNING) {
renderer.drawCenteredText(UI_10_FONT_ID, pageHeight / 2 - 60, "This will clear all cached book data.", true);
renderer.drawCenteredText(UI_10_FONT_ID, pageHeight / 2 - 30, "All reading progress will be lost!", true,
EpdFontFamily::BOLD);
renderer.drawCenteredText(UI_10_FONT_ID, pageHeight / 2 + 10, "Books will need to be re-indexed", true);
renderer.drawCenteredText(UI_10_FONT_ID, pageHeight / 2 + 30, "when opened again.", true);
const auto labels = mappedInput.mapLabels("« Cancel", "Clear", "", "");
renderer.drawButtonHints(UI_10_FONT_ID, labels.btn1, labels.btn2, labels.btn3, labels.btn4);
renderer.displayBuffer();
return;
}
if (state == CLEARING) {
renderer.drawCenteredText(UI_10_FONT_ID, pageHeight / 2, "Clearing cache...", true, EpdFontFamily::BOLD);
renderer.displayBuffer();
return;
}
if (state == SUCCESS) {
renderer.drawCenteredText(UI_10_FONT_ID, pageHeight / 2 - 20, "Cache Cleared", true, EpdFontFamily::BOLD);
String resultText = String(clearedCount) + " items removed";
if (failedCount > 0) {
resultText += ", " + String(failedCount) + " failed";
}
renderer.drawCenteredText(UI_10_FONT_ID, pageHeight / 2 + 10, resultText.c_str());
const auto labels = mappedInput.mapLabels("« Back", "", "", "");
renderer.drawButtonHints(UI_10_FONT_ID, labels.btn1, labels.btn2, labels.btn3, labels.btn4);
renderer.displayBuffer();
return;
}
if (state == FAILED) {
renderer.drawCenteredText(UI_10_FONT_ID, pageHeight / 2 - 20, "Failed to clear cache", true, EpdFontFamily::BOLD);
renderer.drawCenteredText(UI_10_FONT_ID, pageHeight / 2 + 10, "Check serial output for details");
const auto labels = mappedInput.mapLabels("« Back", "", "", "");
renderer.drawButtonHints(UI_10_FONT_ID, labels.btn1, labels.btn2, labels.btn3, labels.btn4);
renderer.displayBuffer();
return;
}
}
void ClearCacheActivity::clearCache() {
Serial.printf("[%lu] [CLEAR_CACHE] Clearing cache...\n", millis());
// Open .crosspoint directory
auto root = SdMan.open("/.crosspoint");
if (!root || !root.isDirectory()) {
Serial.printf("[%lu] [CLEAR_CACHE] Failed to open cache directory\n", millis());
if (root) root.close();
state = FAILED;
updateRequired = true;
return;
}
clearedCount = 0;
failedCount = 0;
char name[128];
// Iterate through all entries in the directory
for (auto file = root.openNextFile(); file; file = root.openNextFile()) {
file.getName(name, sizeof(name));
String itemName(name);
// Only delete directories starting with epub_ or xtc_
if (file.isDirectory() && (itemName.startsWith("epub_") || itemName.startsWith("xtc_"))) {
String fullPath = "/.crosspoint/" + itemName;
Serial.printf("[%lu] [CLEAR_CACHE] Removing cache: %s\n", millis(), fullPath.c_str());
file.close(); // Close before attempting to delete
if (SdMan.removeDir(fullPath.c_str())) {
clearedCount++;
} else {
Serial.printf("[%lu] [CLEAR_CACHE] Failed to remove: %s\n", millis(), fullPath.c_str());
failedCount++;
}
} else {
file.close();
}
}
root.close();
Serial.printf("[%lu] [CLEAR_CACHE] Cache cleared: %d removed, %d failed\n", millis(), clearedCount, failedCount);
state = SUCCESS;
updateRequired = true;
}
void ClearCacheActivity::loop() {
if (state == WARNING) {
if (mappedInput.wasPressed(MappedInputManager::Button::Confirm)) {
Serial.printf("[%lu] [CLEAR_CACHE] User confirmed, starting cache clear\n", millis());
xSemaphoreTake(renderingMutex, portMAX_DELAY);
state = CLEARING;
xSemaphoreGive(renderingMutex);
updateRequired = true;
vTaskDelay(10 / portTICK_PERIOD_MS);
clearCache();
}
if (mappedInput.wasPressed(MappedInputManager::Button::Back)) {
Serial.printf("[%lu] [CLEAR_CACHE] User cancelled\n", millis());
goBack();
}
return;
}
if (state == SUCCESS || state == FAILED) {
if (mappedInput.wasPressed(MappedInputManager::Button::Back)) {
goBack();
}
return;
}
}

View File

@ -0,0 +1,37 @@
#pragma once
#include <freertos/FreeRTOS.h>
#include <freertos/semphr.h>
#include <freertos/task.h>
#include <functional>
#include "activities/ActivityWithSubactivity.h"
class ClearCacheActivity final : public ActivityWithSubactivity {
public:
explicit ClearCacheActivity(GfxRenderer& renderer, MappedInputManager& mappedInput,
const std::function<void()>& goBack)
: ActivityWithSubactivity("ClearCache", renderer, mappedInput), goBack(goBack) {}
void onEnter() override;
void onExit() override;
void loop() override;
private:
enum State { WARNING, CLEARING, SUCCESS, FAILED };
State state = WARNING;
TaskHandle_t displayTaskHandle = nullptr;
SemaphoreHandle_t renderingMutex = nullptr;
bool updateRequired = false;
const std::function<void()> goBack;
int clearedCount = 0;
int failedCount = 0;
static void taskTrampoline(void* param);
[[noreturn]] void displayTaskLoop();
void render();
void clearCache();
};

View File

@ -44,11 +44,11 @@ const SettingInfo controlsSettings[controlsSettingsCount] = {
SettingInfo::Toggle("Long-press Chapter Skip", &CrossPointSettings::longPressChapterSkip),
SettingInfo::Enum("Short Power Button Click", &CrossPointSettings::shortPwrBtn, {"Ignore", "Sleep", "Page Turn"})};
constexpr int systemSettingsCount = 4;
constexpr int systemSettingsCount = 5;
const SettingInfo systemSettings[systemSettingsCount] = {
SettingInfo::Enum("Time to Sleep", &CrossPointSettings::sleepTimeout,
{"1 min", "5 min", "10 min", "15 min", "30 min"}),
SettingInfo::Action("KOReader Sync"), SettingInfo::Action("Calibre Settings"),
SettingInfo::Action("KOReader Sync"), SettingInfo::Action("Calibre Settings"), SettingInfo::Action("Clear Cache"),
SettingInfo::Action("Check for updates")};
} // namespace

View File

@ -1,6 +1,7 @@
#include "CrossPointWebServer.h"
#include <ArduinoJson.h>
#include <Epub.h>
#include <FsHelpers.h>
#include <SDCardManager.h>
#include <WiFi.h>
@ -10,6 +11,7 @@
#include "html/FilesPageHtml.generated.h"
#include "html/HomePageHtml.generated.h"
#include "util/StringUtils.h"
namespace {
// Folders/files to hide from the web interface file browser
@ -28,6 +30,15 @@ size_t wsUploadSize = 0;
size_t wsUploadReceived = 0;
unsigned long wsUploadStartTime = 0;
bool wsUploadInProgress = false;
// Helper function to clear epub cache after upload
void clearEpubCacheIfNeeded(const String& filePath) {
// Only clear cache for .epub files
if (StringUtils::checkFileExtension(filePath, ".epub")) {
Epub(filePath.c_str(), "/.crosspoint").clearCache();
Serial.printf("[%lu] [WEB] Cleared epub cache for: %s\n", millis(), filePath.c_str());
}
}
} // namespace
// File listing page template - now using generated headers:
@ -500,6 +511,12 @@ void CrossPointWebServer::handleUpload() const {
uploadFileName.c_str(), uploadSize, elapsed, avgKbps);
Serial.printf("[%lu] [WEB] [UPLOAD] Diagnostics: %d writes, total write time: %lu ms (%.1f%%)\n", millis(),
writeCount, totalWriteTime, writePercent);
// Clear epub cache to prevent stale metadata issues when overwriting files
String filePath = uploadPath;
if (!filePath.endsWith("/")) filePath += "/";
filePath += uploadFileName;
clearEpubCacheIfNeeded(filePath);
}
}
} else if (upload.status == UPLOAD_FILE_ABORTED) {
@ -787,6 +804,12 @@ void CrossPointWebServer::onWebSocketEvent(uint8_t num, WStype_t type, uint8_t*
Serial.printf("[%lu] [WS] Upload complete: %s (%d bytes in %lu ms, %.1f KB/s)\n", millis(),
wsUploadFileName.c_str(), wsUploadSize, elapsed, kbps);
// Clear epub cache to prevent stale metadata issues when overwriting files
String filePath = wsUploadPath;
if (!filePath.endsWith("/")) filePath += "/";
filePath += wsUploadFileName;
clearEpubCacheIfNeeded(filePath);
wsServer->sendTXT(num, "DONE");
lastProgressSent = 0;
}

View File

@ -2,6 +2,7 @@
#include <HTTPClient.h>
#include <HardwareSerial.h>
#include <StreamString.h>
#include <WiFiClient.h>
#include <WiFiClientSecure.h>
@ -9,7 +10,7 @@
#include "util/UrlUtils.h"
bool HttpDownloader::fetchUrl(const std::string& url, std::string& outContent) {
bool HttpDownloader::fetchUrl(const std::string& url, Stream& outContent) {
// Use WiFiClientSecure for HTTPS, regular WiFiClient for HTTP
std::unique_ptr<WiFiClient> client;
if (UrlUtils::isHttpsUrl(url)) {
@ -34,10 +35,20 @@ bool HttpDownloader::fetchUrl(const std::string& url, std::string& outContent) {
return false;
}
outContent = http.getString().c_str();
http.writeToStream(&outContent);
http.end();
Serial.printf("[%lu] [HTTP] Fetched %zu bytes\n", millis(), outContent.size());
Serial.printf("[%lu] [HTTP] Fetch success\n", millis());
return true;
}
bool HttpDownloader::fetchUrl(const std::string& url, std::string& outContent) {
StreamString stream;
if (!fetchUrl(url, stream)) {
return false;
}
outContent = stream.c_str();
return true;
}

View File

@ -27,6 +27,8 @@ class HttpDownloader {
*/
static bool fetchUrl(const std::string& url, std::string& outContent);
static bool fetchUrl(const std::string& url, Stream& stream);
/**
* Download a file to the SD card.
* @param url The URL to download

View File

@ -49,6 +49,18 @@ bool checkFileExtension(const std::string& fileName, const char* extension) {
return true;
}
bool checkFileExtension(const String& fileName, const char* extension) {
if (fileName.length() < strlen(extension)) {
return false;
}
String localFile(fileName);
String localExtension(extension);
localFile.toLowerCase();
localExtension.toLowerCase();
return localFile.endsWith(localExtension);
}
size_t utf8RemoveLastChar(std::string& str) {
if (str.empty()) return 0;
size_t pos = str.size() - 1;

View File

@ -1,5 +1,7 @@
#pragma once
#include <WString.h>
#include <string>
namespace StringUtils {
@ -15,6 +17,7 @@ std::string sanitizeFilename(const std::string& name, size_t maxLength = 100);
* Check if the given filename ends with the specified extension (case-insensitive).
*/
bool checkFileExtension(const std::string& fileName, const char* extension);
bool checkFileExtension(const String& fileName, const char* extension);
// UTF-8 safe string truncation - removes one character from the end
// Returns the new size after removing one UTF-8 character