mirror of
https://github.com/daveallie/crosspoint-reader.git
synced 2026-02-06 23:57:39 +03:00
Compare commits
No commits in common. "be233917ec8f24b5ad03ea80107fd878928c7e9f" and "f6767c857f85d90ad7017ba416b159bcb2d1907f" have entirely different histories.
be233917ec
...
f6767c857f
6
.github/workflows/ci.yml
vendored
6
.github/workflows/ci.yml
vendored
@ -12,6 +12,12 @@ jobs:
|
||||
- uses: actions/checkout@v6
|
||||
with:
|
||||
submodules: recursive
|
||||
- uses: actions/cache@v5
|
||||
with:
|
||||
path: |
|
||||
~/.cache/pip
|
||||
~/.platformio/.cache
|
||||
key: ${{ runner.os }}-pio
|
||||
- uses: actions/setup-python@v6
|
||||
with:
|
||||
python-version: '3.14'
|
||||
|
||||
41
README.md
41
README.md
@ -25,7 +25,7 @@ This project is **not affiliated with Xteink**; it's built as a community projec
|
||||
|
||||
## Features & Usage
|
||||
|
||||
- [x] EPUB parsing and rendering (EPUB 2 and EPUB 3)
|
||||
- [x] EPUB parsing and rendering
|
||||
- [ ] Image support within EPUB
|
||||
- [x] Saved reading position
|
||||
- [x] File explorer with file picker
|
||||
@ -33,13 +33,11 @@ This project is **not affiliated with Xteink**; it's built as a community projec
|
||||
- [x] Support nested folders
|
||||
- [ ] EPUB picker with cover art
|
||||
- [x] Custom sleep screen
|
||||
- [x] Cover sleep screen
|
||||
- [ ] Cover sleep screen
|
||||
- [x] Wifi book upload
|
||||
- [x] Wifi OTA updates
|
||||
- [x] Configurable font, layout, and display options
|
||||
- [ ] User provided fonts
|
||||
- [ ] Full UTF support
|
||||
- [x] Screen rotation
|
||||
- [ ] Wifi OTA updates
|
||||
- [ ] Configurable font, layout, and display options
|
||||
- [ ] Screen rotation
|
||||
|
||||
See [the user guide](./USER_GUIDE.md) for instructions on operating CrossPoint.
|
||||
|
||||
@ -100,9 +98,9 @@ CrossPoint Reader is pretty aggressive about caching data down to the SD card to
|
||||
has ~380KB of usable RAM, so we have to be careful. A lot of the decisions made in the design of the firmware were based
|
||||
on this constraint.
|
||||
|
||||
### Data caching
|
||||
### EPUB caching
|
||||
|
||||
The first time chapters of a book are loaded, they are cached to the SD card. Subsequent loads are served from the
|
||||
The first time chapters of an EPUB are loaded, they are cached to the SD card. Subsequent loads are served from the
|
||||
cache. This cache directory exists at `.crosspoint` on the SD card. The structure is as follows:
|
||||
|
||||
|
||||
@ -110,22 +108,25 @@ cache. This cache directory exists at `.crosspoint` on the SD card. The structur
|
||||
.crosspoint/
|
||||
├── epub_12471232/ # Each EPUB is cached to a subdirectory named `epub_<hash>`
|
||||
│ ├── progress.bin # Stores reading progress (chapter, page, etc.)
|
||||
│ ├── cover.bmp # Book cover image (once generated)
|
||||
│ ├── book.bin # Book metadata (title, author, spine, table of contents, etc.)
|
||||
│ └── sections/ # All chapter data is stored in the sections subdirectory
|
||||
│ ├── 0.bin # Chapter data (screen count, all text layout info, etc.)
|
||||
│ ├── 1.bin # files are named by their index in the spine
|
||||
│ └── ...
|
||||
│ ├── 0/ # Each chapter is stored in a subdirectory named by its index (based on the spine order)
|
||||
│ │ ├── section.bin # Section metadata (page count)
|
||||
│ │ ├── page_0.bin # Each page is stored in a separate file, it
|
||||
│ │ ├── page_1.bin # contains the position (x, y) and text for each word
|
||||
│ │ └── ...
|
||||
│ ├── 1/
|
||||
│ │ ├── section.bin
|
||||
│ │ ├── page_0.bin
|
||||
│ │ ├── page_1.bin
|
||||
│ │ └── ...
|
||||
│ └── ...
|
||||
│
|
||||
└── epub_189013891/
|
||||
```
|
||||
|
||||
Deleting the `.crosspoint` directory will clear the entire cache.
|
||||
Deleting the `.crosspoint` directory will clear the cache.
|
||||
|
||||
Due the way it's currently implemented, the cache is not automatically cleared when a book is deleted and moving a book
|
||||
file will use a new cache directory, resetting the reading progress.
|
||||
|
||||
For more details on the internal file structures, see the [file formats document](./docs/file-formats.md).
|
||||
Due the way it's currently implemented, the cache is not automatically cleared when the EPUB is deleted and moving an
|
||||
EPUB file will reset the reading progress.
|
||||
|
||||
## Contributing
|
||||
|
||||
|
||||
@ -5,7 +5,7 @@ the device.
|
||||
|
||||
## 1. Hardware Overview
|
||||
|
||||
The device utilises the standard buttons on the Xtink X4 (in the same layout as the manufacturer firmware, by default):
|
||||
The device utilises the standard buttons on the Xtink X4 in the same layout:
|
||||
|
||||
### Button Layout
|
||||
| Location | Buttons |
|
||||
@ -13,23 +13,20 @@ The device utilises the standard buttons on the Xtink X4 (in the same layout as
|
||||
| **Bottom Edge** | **Back**, **Confirm**, **Left**, **Right** |
|
||||
| **Right Side** | **Power**, **Volume Up**, **Volume Down** |
|
||||
|
||||
Button layout can be customized in **[Settings](#35-settings)**.
|
||||
|
||||
---
|
||||
|
||||
## 2. Power & Startup
|
||||
|
||||
### Power On / Off
|
||||
|
||||
To turn the device on or off, **press and hold the Power button for half a second**. In **[Settings](#35-settings)** you can configure
|
||||
To turn the device on or off, **press and hold the Power button for half a second**. In **Settings** you can configure
|
||||
the power button to trigger on a short press instead of a long one.
|
||||
|
||||
### First Launch
|
||||
|
||||
Upon turning the device on for the first time, you will be placed on the **[Home](#31-home-screen)** screen.
|
||||
Upon turning the device on for the first time, you will be placed on the **Home** screen.
|
||||
|
||||
> [!NOTE]
|
||||
> On subsequent restarts, the firmware will automatically reopen the last book you were reading.
|
||||
> **Note:** On subsequent restarts, the firmware will automatically reopen the last book you were reading.
|
||||
|
||||
---
|
||||
|
||||
@ -37,10 +34,10 @@ Upon turning the device on for the first time, you will be placed on the **[Home
|
||||
|
||||
### 3.1 Home Screen
|
||||
|
||||
The Home Screen is the main entry point to the firmware. From here you can navigate to **[Reading Mode](#4-reading-mode)** with the most recently read book, **[Book Selection](#32-book-selection)**,
|
||||
**[Settings](#35-settings)**, or the **[File Upload](#34-file-upload-screen)** screen.
|
||||
The Home Screen is the main entry point to the firmware. From here you can navigate to the **Book Selection** screen,
|
||||
**Settings** screen, or **File Upload** screen.
|
||||
|
||||
### 3.2 Book Selection
|
||||
### 3.2 Book Selection (Read)
|
||||
|
||||
The Book Selection acts as a folder and file browser.
|
||||
|
||||
@ -48,13 +45,13 @@ The Book Selection acts as a folder and file browser.
|
||||
and down through folders and books.
|
||||
* **Open Selection:** Press **Confirm** to open a folder or read a selected book.
|
||||
|
||||
### 3.3 Reading Mode
|
||||
### 3.3 Reading Screen
|
||||
|
||||
See [Reading Mode](#4-reading-mode) below for more information.
|
||||
See [4. Reading Mode](#4-reading-mode) below for more information.
|
||||
|
||||
### 3.4 File Upload Screen
|
||||
|
||||
The File Upload screen allows you to upload new e-books to the device. When you enter the screen, you'll be prompted with
|
||||
The File Upload screen allows you to upload new e-books to the device. When you enter the screen you'll be prompted with
|
||||
a WiFi selection dialog and then your X4 will start hosting a web server.
|
||||
|
||||
See the [webserver docs](./docs/webserver.md) for more information on how to connect to the web server and upload files.
|
||||
@ -65,32 +62,11 @@ The Settings screen allows you to configure the device's behavior. There are a f
|
||||
- **Sleep Screen**: Which sleep screen to display when the device sleeps, options are:
|
||||
- "Dark" (default) - The default dark sleep screen
|
||||
- "Light" - The same default sleep screen, on a white background
|
||||
- "Custom" - Custom images from the SD card, see [Sleep Screen](#36-sleep-screen) below for more information
|
||||
- "Custom" - Custom images from the SD card, see [3.6 Sleep Screen](#36-sleep-screen) below for more information
|
||||
- "Cover" - The book cover image (Note: this is experimental and may not work as expected)
|
||||
- **Status Bar**: Configure the status bar displayed while reading, options are:
|
||||
- "None" - No status bar
|
||||
- "No Progress" - Show status bar without reading progress
|
||||
- "Full" - Show status bar with reading progress
|
||||
- **Extra Paragraph Spacing**: If enabled, vertical space will be added between paragraphs in the book, if disabled,
|
||||
paragraphs will not have vertical space between them, but will have first word indentation.
|
||||
- **Short Power Button Click**: Whether to trigger the power button on a short press or a long press.
|
||||
- **Reading Orientation**: Set the screen orientation for reading, options are:
|
||||
- "Portrait" (default) - Standard portrait orientation
|
||||
- "Landscape CW" - Landscape, rotated clockwise
|
||||
- "Inverted" - Portrait, upside down
|
||||
- "Landscape CCW" - Landscape, rotated counter-clockwise
|
||||
- **Front Button Layout**: Configure the order of the bottom edge buttons, options are:
|
||||
- "Bck, Cnfrm, Lft, Rght" (default) - Back, Confirm, Left, Right
|
||||
- "Lft, Rght, Bck, Cnfrm" - Left, Right, Back, Confirm
|
||||
- "Lft, Bck, Cnfrm, Rght" - Left, Back, Confirm, Right
|
||||
- **Side Button Layout**: Swap the order of the volume buttons from Previous/Next to Next/Previous. This change is only in effect when reading.
|
||||
- **Reader Font Family**: Choose the font used for reading, options are:
|
||||
- "Bookerly" (default) - Amazon's reading font
|
||||
- "Noto Sans" - Google's sans-serif font
|
||||
- "Open Dyslexic" - Font designed for readers with dyslexia
|
||||
- **Reader Font Size**: Adjust the text size for reading, options are "Small", "Medium", "Large", or "X Large".
|
||||
- **Reader Line Spacing**: Adjust the spacing between lines, options are "Tight", "Normal", or "Wide".
|
||||
- **Check for updates**: Check for firmware updates over WiFi.
|
||||
|
||||
### 3.6 Sleep Screen
|
||||
|
||||
@ -98,7 +74,7 @@ You can customize the sleep screen by placing custom images in specific location
|
||||
|
||||
- **Single Image:** Place a file named `sleep.bmp` in the root directory.
|
||||
- **Multiple Images:** Create a `sleep` directory in the root of the SD card and place any number of `.bmp` images
|
||||
inside. If images are found in this directory, they will take priority over the `sleep.bmp` file, and one will be
|
||||
inside. If images are found in this directory, they will take priority over the `sleep.png` file, and one will be
|
||||
randomly selected each time the device sleeps.
|
||||
|
||||
> [!NOTE]
|
||||
@ -126,9 +102,8 @@ Once you have opened a book, the button layout changes to facilitate reading.
|
||||
* **Previous Chapter:** Press and **hold** the **Left** (or **Volume Up**) button briefly, then release.
|
||||
|
||||
### System Navigation
|
||||
* **Return to Book Selection:** Press **Back** to close the book and return to the **[Book Selection](#32-book-selection)** screen.
|
||||
* **Return to Home:** Press and hold **Back** to close the book and return to the **[Home](#31-home-screen)** screen.
|
||||
* **Chapter Menu:** Press **Confirm** to open the **[Table of Contents/Chapter Selection](#5-chapter-selection-screen)**.
|
||||
* **Return to Home:** Press **Back** to close the book and return to the Book Selection screen.
|
||||
* **Chapter Menu:** Press **Confirm** to open the Table of Contents/Chapter Selection screen.
|
||||
|
||||
---
|
||||
|
||||
@ -148,3 +123,5 @@ Please note that this firmware is currently in active development. The following
|
||||
are planned for future updates:
|
||||
|
||||
* **Images:** Embedded images in e-books will not render.
|
||||
* **Text Formatting:** There are currently no settings to adjust font type, size, line spacing, or margins.
|
||||
* **Rotation**: Different rotation options are not supported.
|
||||
|
||||
@ -1,19 +1,3 @@
|
||||
#!/bin/bash
|
||||
|
||||
GIT_LS_FILES_FLAGS=""
|
||||
if [[ "$1" == "-g" ]]; then
|
||||
GIT_LS_FILES_FLAGS="--modified"
|
||||
fi
|
||||
|
||||
# --- Main Logic ---
|
||||
|
||||
# Format all files (or only modified files if -g is passed)
|
||||
|
||||
# Use 'git ls-files' to get a list of all files tracked by git:
|
||||
# --modified: files tracked by git that have been modified (staged or unstaged)
|
||||
# --exclude-standard: ignores files in .gitignore
|
||||
# Additionally exclude files in 'lib/EpdFont/builtinFonts/' as they are script-generated.
|
||||
git ls-files --exclude-standard ${GIT_LS_FILES_FLAGS} \
|
||||
| grep -E '\.(c|cpp|h|hpp)$' \
|
||||
| grep -v -E '^lib/EpdFont/builtinFonts/' \
|
||||
| xargs -r clang-format -style=file -i
|
||||
find src lib \( -name "*.c" -o -name "*.cpp" -o -name "*.h" -o -name "*.hpp" \) -exec clang-format -style=file -i {} +
|
||||
|
||||
@ -1,221 +0,0 @@
|
||||
# File Formats
|
||||
|
||||
## `book.bin`
|
||||
|
||||
### Version 3
|
||||
|
||||
ImHex Pattern:
|
||||
|
||||
```c++
|
||||
import std.mem;
|
||||
import std.string;
|
||||
import std.core;
|
||||
|
||||
// === Configuration ===
|
||||
#define EXPECTED_VERSION 3
|
||||
#define MAX_STRING_LENGTH 65535
|
||||
|
||||
// === String Structure ===
|
||||
|
||||
struct String {
|
||||
u32 length [[hidden, comment("String byte length")]];
|
||||
if (length > MAX_STRING_LENGTH) {
|
||||
std::warning(std::format("Unusually large string length: {} bytes", length));
|
||||
}
|
||||
char data[length] [[comment("UTF-8 string data")]];
|
||||
} [[sealed, format("format_string"), comment("Length-prefixed UTF-8 string")]];
|
||||
|
||||
fn format_string(String s) {
|
||||
return s.data;
|
||||
};
|
||||
|
||||
// === Metadata Structure ===
|
||||
|
||||
struct Metadata {
|
||||
String title [[comment("Book title")]];
|
||||
String author [[comment("Book author")]];
|
||||
String coverItemHref [[comment("Path to cover image")]];
|
||||
String textReferenceHref [[comment("Path to guided first text reference")]];
|
||||
} [[comment("Book metadata information")]];
|
||||
|
||||
// === Spine Entry Structure ===
|
||||
|
||||
struct SpineEntry {
|
||||
String href [[comment("Resource path")]];
|
||||
u32 cumulativeSize [[comment("Cumulative size in bytes"), color("FF6B6B")]];
|
||||
s16 tocIndex [[comment("Index into TOC (-1 if none)"), color("4ECDC4")]];
|
||||
} [[comment("Spine entry defining reading order")]];
|
||||
|
||||
// === TOC Entry Structure ===
|
||||
|
||||
struct TocEntry {
|
||||
String title [[comment("Chapter/section title")]];
|
||||
String href [[comment("Resource path")]];
|
||||
String anchor [[comment("Fragment identifier")]];
|
||||
u8 level [[comment("Nesting level (0-255)"), color("95E1D3")]];
|
||||
s16 spineIndex [[comment("Index into spine (-1 if none)"), color("F38181")]];
|
||||
} [[comment("Table of contents entry")]];
|
||||
|
||||
// === Book Bin Structure ===
|
||||
|
||||
struct BookBin {
|
||||
// Header
|
||||
u8 version [[comment("Format version"), color("FFD93D")]];
|
||||
|
||||
// Version validation
|
||||
if (version != EXPECTED_VERSION) {
|
||||
std::error(std::format("Unsupported version: {} (expected {})", version, EXPECTED_VERSION));
|
||||
}
|
||||
|
||||
u32 lutOffset [[comment("Offset to lookup tables"), color("6BCB77")]];
|
||||
u16 spineCount [[comment("Number of spine entries"), color("4D96FF")]];
|
||||
u16 tocCount [[comment("Number of TOC entries"), color("FF6B9D")]];
|
||||
|
||||
// Metadata section
|
||||
Metadata metadata [[comment("Book metadata")]];
|
||||
|
||||
// Validate LUT offset alignment
|
||||
u32 currentOffset = $;
|
||||
if (currentOffset != lutOffset) {
|
||||
std::warning(std::format("LUT offset mismatch: expected 0x{:X}, got 0x{:X}", lutOffset, currentOffset));
|
||||
}
|
||||
|
||||
// Lookup Tables
|
||||
u32 spineLut[spineCount] [[comment("Spine entry offsets"), color("4D96FF")]];
|
||||
u32 tocLut[tocCount] [[comment("TOC entry offsets"), color("FF6B9D")]];
|
||||
|
||||
// Data Entries
|
||||
SpineEntry spines[spineCount] [[comment("Spine entries (reading order)")]];
|
||||
TocEntry toc[tocCount] [[comment("Table of contents entries")]];
|
||||
};
|
||||
|
||||
// === File Parsing ===
|
||||
|
||||
BookBin book @ 0x00;
|
||||
|
||||
// Validate we've consumed the entire file
|
||||
u32 fileSize = std::mem::size();
|
||||
u32 parsedSize = $;
|
||||
|
||||
if (parsedSize != fileSize) {
|
||||
std::warning(std::format("Unparsed data detected: {} bytes remaining at offset 0x{:X}", fileSize - parsedSize, parsedSize));
|
||||
}
|
||||
```
|
||||
|
||||
## `section.bin`
|
||||
|
||||
### Version 8
|
||||
|
||||
ImHex Pattern:
|
||||
|
||||
```c++
|
||||
import std.mem;
|
||||
import std.string;
|
||||
import std.core;
|
||||
|
||||
// === Configuration ===
|
||||
#define EXPECTED_VERSION 8
|
||||
#define MAX_STRING_LENGTH 65535
|
||||
|
||||
// === String Structure ===
|
||||
|
||||
struct String {
|
||||
u32 length [[hidden, comment("String byte length")]];
|
||||
if (length > MAX_STRING_LENGTH) {
|
||||
std::warning(std::format("Unusually large string length: {} bytes", length));
|
||||
}
|
||||
char data[length] [[comment("UTF-8 string data")]];
|
||||
} [[sealed, format("format_string"), comment("Length-prefixed UTF-8 string")]];
|
||||
|
||||
fn format_string(String s) {
|
||||
return s.data;
|
||||
};
|
||||
|
||||
// === Page Structure ===
|
||||
|
||||
enum StorageType : u8 {
|
||||
PageLine = 1
|
||||
};
|
||||
|
||||
enum WordStyle : u8 {
|
||||
REGULAR = 0,
|
||||
BOLD = 1,
|
||||
ITALIC = 2,
|
||||
BOLD_ITALIC = 3
|
||||
};
|
||||
|
||||
enum BlockStyle : u8 {
|
||||
JUSTIFIED = 0,
|
||||
LEFT_ALIGN = 1,
|
||||
CENTER_ALIGN = 2,
|
||||
RIGHT_ALIGN = 3,
|
||||
};
|
||||
|
||||
struct PageLine {
|
||||
s16 xPos;
|
||||
s16 yPos;
|
||||
u16 wordCount;
|
||||
String words[wordCount];
|
||||
u16 wordXPos[wordCount];
|
||||
WordStyle wordStyle[wordCount];
|
||||
BlockStyle blockStyle;
|
||||
};
|
||||
|
||||
struct PageElement {
|
||||
u8 pageElementType;
|
||||
if (pageElementType == 1) {
|
||||
PageLine pageLine [[inline]];
|
||||
} else {
|
||||
std::error(std::format("Unknown page element type: {}", pageElementType));
|
||||
}
|
||||
};
|
||||
|
||||
struct Page {
|
||||
u16 elementCount;
|
||||
PageElement elements[elementCount] [[inline]];
|
||||
};
|
||||
|
||||
// === Section Bin Structure ===
|
||||
|
||||
struct SectionBin {
|
||||
// Header
|
||||
u8 version [[comment("Format version"), color("FFD93D")]];
|
||||
|
||||
// Version validation
|
||||
if (version != EXPECTED_VERSION) {
|
||||
std::error(std::format("Unsupported version: {} (expected {})", version, EXPECTED_VERSION));
|
||||
}
|
||||
|
||||
// Cache busting parameters
|
||||
s32 fontId;
|
||||
float lineCompression;
|
||||
bool extraParagraphSpacing;
|
||||
u16 viewportWidth;
|
||||
u16 vieportHeight;
|
||||
u16 pageCount;
|
||||
u32 lutOffset;
|
||||
|
||||
Page page[pageCount];
|
||||
|
||||
// Validate LUT offset alignment
|
||||
u32 currentOffset = $;
|
||||
if (currentOffset != lutOffset) {
|
||||
std::warning(std::format("LUT offset mismatch: expected 0x{:X}, got 0x{:X}", lutOffset, currentOffset));
|
||||
}
|
||||
|
||||
// Lookup Tables
|
||||
u32 lut[pageCount];
|
||||
};
|
||||
|
||||
// === File Parsing ===
|
||||
|
||||
SectionBin book @ 0x00;
|
||||
|
||||
// Validate we've consumed the entire file
|
||||
u32 fileSize = std::mem::size();
|
||||
u32 parsedSize = $;
|
||||
|
||||
if (parsedSize != fileSize) {
|
||||
std::warning(std::format("Unparsed data detected: {} bytes remaining at offset 0x{:X}", fileSize - parsedSize, parsedSize));
|
||||
}
|
||||
```
|
||||
@ -2,7 +2,8 @@
|
||||
|
||||
#include <Utf8.h>
|
||||
|
||||
#include <algorithm>
|
||||
inline int min(const int a, const int b) { return a < b ? a : b; }
|
||||
inline int max(const int a, const int b) { return a < b ? b : a; }
|
||||
|
||||
void EpdFont::getTextBounds(const char* string, const int startX, const int startY, int* minX, int* minY, int* maxX,
|
||||
int* maxY) const {
|
||||
@ -31,10 +32,10 @@ void EpdFont::getTextBounds(const char* string, const int startX, const int star
|
||||
continue;
|
||||
}
|
||||
|
||||
*minX = std::min(*minX, cursorX + glyph->left);
|
||||
*maxX = std::max(*maxX, cursorX + glyph->left + glyph->width);
|
||||
*minY = std::min(*minY, cursorY + glyph->top - glyph->height);
|
||||
*maxY = std::max(*maxY, cursorY + glyph->top);
|
||||
*minX = min(*minX, cursorX + glyph->left);
|
||||
*maxX = max(*maxX, cursorX + glyph->left + glyph->width);
|
||||
*minY = min(*minY, cursorY + glyph->top - glyph->height);
|
||||
*maxY = max(*maxY, cursorY + glyph->top);
|
||||
cursorX += glyph->advanceX;
|
||||
}
|
||||
}
|
||||
@ -58,28 +59,14 @@ bool EpdFont::hasPrintableChars(const char* string) const {
|
||||
|
||||
const EpdGlyph* EpdFont::getGlyph(const uint32_t cp) const {
|
||||
const EpdUnicodeInterval* intervals = data->intervals;
|
||||
const int count = data->intervalCount;
|
||||
|
||||
if (count == 0) return nullptr;
|
||||
|
||||
// Binary search for O(log n) lookup instead of O(n)
|
||||
// Critical for Korean fonts with many unicode intervals
|
||||
int left = 0;
|
||||
int right = count - 1;
|
||||
|
||||
while (left <= right) {
|
||||
const int mid = left + (right - left) / 2;
|
||||
const EpdUnicodeInterval* interval = &intervals[mid];
|
||||
|
||||
if (cp < interval->first) {
|
||||
right = mid - 1;
|
||||
} else if (cp > interval->last) {
|
||||
left = mid + 1;
|
||||
} else {
|
||||
// Found: cp >= interval->first && cp <= interval->last
|
||||
for (int i = 0; i < data->intervalCount; i++) {
|
||||
const EpdUnicodeInterval* interval = &intervals[i];
|
||||
if (cp >= interval->first && cp <= interval->last) {
|
||||
return &data->glyph[interval->offset + (cp - interval->first)];
|
||||
}
|
||||
if (cp < interval->first) {
|
||||
return nullptr;
|
||||
}
|
||||
}
|
||||
|
||||
return nullptr;
|
||||
}
|
||||
|
||||
@ -1,6 +1,6 @@
|
||||
#include "EpdFontFamily.h"
|
||||
|
||||
const EpdFont* EpdFontFamily::getFont(const Style style) const {
|
||||
const EpdFont* EpdFontFamily::getFont(const EpdFontStyle style) const {
|
||||
if (style == BOLD && bold) {
|
||||
return bold;
|
||||
}
|
||||
@ -22,16 +22,16 @@ const EpdFont* EpdFontFamily::getFont(const Style style) const {
|
||||
return regular;
|
||||
}
|
||||
|
||||
void EpdFontFamily::getTextDimensions(const char* string, int* w, int* h, const Style style) const {
|
||||
void EpdFontFamily::getTextDimensions(const char* string, int* w, int* h, const EpdFontStyle style) const {
|
||||
getFont(style)->getTextDimensions(string, w, h);
|
||||
}
|
||||
|
||||
bool EpdFontFamily::hasPrintableChars(const char* string, const Style style) const {
|
||||
bool EpdFontFamily::hasPrintableChars(const char* string, const EpdFontStyle style) const {
|
||||
return getFont(style)->hasPrintableChars(string);
|
||||
}
|
||||
|
||||
const EpdFontData* EpdFontFamily::getData(const Style style) const { return getFont(style)->data; }
|
||||
const EpdFontData* EpdFontFamily::getData(const EpdFontStyle style) const { return getFont(style)->data; }
|
||||
|
||||
const EpdGlyph* EpdFontFamily::getGlyph(const uint32_t cp, const Style style) const {
|
||||
const EpdGlyph* EpdFontFamily::getGlyph(const uint32_t cp, const EpdFontStyle style) const {
|
||||
return getFont(style)->getGlyph(cp);
|
||||
};
|
||||
|
||||
@ -1,24 +1,24 @@
|
||||
#pragma once
|
||||
#include "EpdFont.h"
|
||||
|
||||
enum EpdFontStyle { REGULAR, BOLD, ITALIC, BOLD_ITALIC };
|
||||
|
||||
class EpdFontFamily {
|
||||
public:
|
||||
enum Style : uint8_t { REGULAR = 0, BOLD = 1, ITALIC = 2, BOLD_ITALIC = 3 };
|
||||
|
||||
explicit EpdFontFamily(const EpdFont* regular, const EpdFont* bold = nullptr, const EpdFont* italic = nullptr,
|
||||
const EpdFont* boldItalic = nullptr)
|
||||
: regular(regular), bold(bold), italic(italic), boldItalic(boldItalic) {}
|
||||
~EpdFontFamily() = default;
|
||||
void getTextDimensions(const char* string, int* w, int* h, Style style = REGULAR) const;
|
||||
bool hasPrintableChars(const char* string, Style style = REGULAR) const;
|
||||
const EpdFontData* getData(Style style = REGULAR) const;
|
||||
const EpdGlyph* getGlyph(uint32_t cp, Style style = REGULAR) const;
|
||||
|
||||
private:
|
||||
const EpdFont* regular;
|
||||
const EpdFont* bold;
|
||||
const EpdFont* italic;
|
||||
const EpdFont* boldItalic;
|
||||
|
||||
const EpdFont* getFont(Style style) const;
|
||||
const EpdFont* getFont(EpdFontStyle style) const;
|
||||
|
||||
public:
|
||||
explicit EpdFontFamily(const EpdFont* regular, const EpdFont* bold = nullptr, const EpdFont* italic = nullptr,
|
||||
const EpdFont* boldItalic = nullptr)
|
||||
: regular(regular), bold(bold), italic(italic), boldItalic(boldItalic) {}
|
||||
~EpdFontFamily() = default;
|
||||
void getTextDimensions(const char* string, int* w, int* h, EpdFontStyle style = REGULAR) const;
|
||||
bool hasPrintableChars(const char* string, EpdFontStyle style = REGULAR) const;
|
||||
|
||||
const EpdFontData* getData(EpdFontStyle style = REGULAR) const;
|
||||
const EpdGlyph* getGlyph(uint32_t cp, EpdFontStyle style = REGULAR) const;
|
||||
};
|
||||
|
||||
@ -1,55 +0,0 @@
|
||||
#pragma once
|
||||
|
||||
#include <builtinFonts/bookerly_12_bold.h>
|
||||
#include <builtinFonts/bookerly_12_bolditalic.h>
|
||||
#include <builtinFonts/bookerly_12_italic.h>
|
||||
#include <builtinFonts/bookerly_12_regular.h>
|
||||
#include <builtinFonts/bookerly_14_bold.h>
|
||||
#include <builtinFonts/bookerly_14_bolditalic.h>
|
||||
#include <builtinFonts/bookerly_14_italic.h>
|
||||
#include <builtinFonts/bookerly_14_regular.h>
|
||||
#include <builtinFonts/bookerly_16_bold.h>
|
||||
#include <builtinFonts/bookerly_16_bolditalic.h>
|
||||
#include <builtinFonts/bookerly_16_italic.h>
|
||||
#include <builtinFonts/bookerly_16_regular.h>
|
||||
#include <builtinFonts/bookerly_18_bold.h>
|
||||
#include <builtinFonts/bookerly_18_bolditalic.h>
|
||||
#include <builtinFonts/bookerly_18_italic.h>
|
||||
#include <builtinFonts/bookerly_18_regular.h>
|
||||
#include <builtinFonts/notosans_8_regular.h>
|
||||
#include <builtinFonts/notosans_12_bold.h>
|
||||
#include <builtinFonts/notosans_12_bolditalic.h>
|
||||
#include <builtinFonts/notosans_12_italic.h>
|
||||
#include <builtinFonts/notosans_12_regular.h>
|
||||
#include <builtinFonts/notosans_14_bold.h>
|
||||
#include <builtinFonts/notosans_14_bolditalic.h>
|
||||
#include <builtinFonts/notosans_14_italic.h>
|
||||
#include <builtinFonts/notosans_14_regular.h>
|
||||
#include <builtinFonts/notosans_16_bold.h>
|
||||
#include <builtinFonts/notosans_16_bolditalic.h>
|
||||
#include <builtinFonts/notosans_16_italic.h>
|
||||
#include <builtinFonts/notosans_16_regular.h>
|
||||
#include <builtinFonts/notosans_18_bold.h>
|
||||
#include <builtinFonts/notosans_18_bolditalic.h>
|
||||
#include <builtinFonts/notosans_18_italic.h>
|
||||
#include <builtinFonts/notosans_18_regular.h>
|
||||
#include <builtinFonts/opendyslexic_10_bold.h>
|
||||
#include <builtinFonts/opendyslexic_10_bolditalic.h>
|
||||
#include <builtinFonts/opendyslexic_10_italic.h>
|
||||
#include <builtinFonts/opendyslexic_10_regular.h>
|
||||
#include <builtinFonts/opendyslexic_12_bold.h>
|
||||
#include <builtinFonts/opendyslexic_12_bolditalic.h>
|
||||
#include <builtinFonts/opendyslexic_12_italic.h>
|
||||
#include <builtinFonts/opendyslexic_12_regular.h>
|
||||
#include <builtinFonts/opendyslexic_14_bold.h>
|
||||
#include <builtinFonts/opendyslexic_14_bolditalic.h>
|
||||
#include <builtinFonts/opendyslexic_14_italic.h>
|
||||
#include <builtinFonts/opendyslexic_14_regular.h>
|
||||
#include <builtinFonts/opendyslexic_8_bold.h>
|
||||
#include <builtinFonts/opendyslexic_8_bolditalic.h>
|
||||
#include <builtinFonts/opendyslexic_8_italic.h>
|
||||
#include <builtinFonts/opendyslexic_8_regular.h>
|
||||
#include <builtinFonts/ubuntu_10_bold.h>
|
||||
#include <builtinFonts/ubuntu_10_regular.h>
|
||||
#include <builtinFonts/ubuntu_12_bold.h>
|
||||
#include <builtinFonts/ubuntu_12_regular.h>
|
||||
File diff suppressed because it is too large
Load Diff
File diff suppressed because it is too large
Load Diff
File diff suppressed because it is too large
Load Diff
File diff suppressed because it is too large
Load Diff
File diff suppressed because it is too large
Load Diff
File diff suppressed because it is too large
Load Diff
File diff suppressed because it is too large
Load Diff
File diff suppressed because it is too large
Load Diff
File diff suppressed because it is too large
Load Diff
File diff suppressed because it is too large
Load Diff
File diff suppressed because it is too large
Load Diff
File diff suppressed because it is too large
Load Diff
File diff suppressed because it is too large
Load Diff
File diff suppressed because it is too large
Load Diff
File diff suppressed because it is too large
Load Diff
File diff suppressed because it is too large
Load Diff
4063
lib/EpdFont/builtinFonts/bookerly_2b.h
Normal file
4063
lib/EpdFont/builtinFonts/bookerly_2b.h
Normal file
File diff suppressed because it is too large
Load Diff
4292
lib/EpdFont/builtinFonts/bookerly_bold_2b.h
Normal file
4292
lib/EpdFont/builtinFonts/bookerly_bold_2b.h
Normal file
File diff suppressed because it is too large
Load Diff
4457
lib/EpdFont/builtinFonts/bookerly_bold_italic_2b.h
Normal file
4457
lib/EpdFont/builtinFonts/bookerly_bold_italic_2b.h
Normal file
File diff suppressed because it is too large
Load Diff
4178
lib/EpdFont/builtinFonts/bookerly_italic_2b.h
Normal file
4178
lib/EpdFont/builtinFonts/bookerly_italic_2b.h
Normal file
File diff suppressed because it is too large
Load Diff
File diff suppressed because it is too large
Load Diff
File diff suppressed because it is too large
Load Diff
File diff suppressed because it is too large
Load Diff
File diff suppressed because it is too large
Load Diff
File diff suppressed because it is too large
Load Diff
File diff suppressed because it is too large
Load Diff
File diff suppressed because it is too large
Load Diff
File diff suppressed because it is too large
Load Diff
File diff suppressed because it is too large
Load Diff
File diff suppressed because it is too large
Load Diff
File diff suppressed because it is too large
Load Diff
File diff suppressed because it is too large
Load Diff
File diff suppressed because it is too large
Load Diff
File diff suppressed because it is too large
Load Diff
File diff suppressed because it is too large
Load Diff
File diff suppressed because it is too large
Load Diff
File diff suppressed because it is too large
Load Diff
File diff suppressed because it is too large
Load Diff
File diff suppressed because it is too large
Load Diff
File diff suppressed because it is too large
Load Diff
File diff suppressed because it is too large
Load Diff
File diff suppressed because it is too large
Load Diff
File diff suppressed because it is too large
Load Diff
File diff suppressed because it is too large
Load Diff
File diff suppressed because it is too large
Load Diff
File diff suppressed because it is too large
Load Diff
File diff suppressed because it is too large
Load Diff
File diff suppressed because it is too large
Load Diff
File diff suppressed because it is too large
Load Diff
File diff suppressed because it is too large
Load Diff
File diff suppressed because it is too large
Load Diff
File diff suppressed because it is too large
Load Diff
File diff suppressed because it is too large
Load Diff
1041
lib/EpdFont/builtinFonts/pixelarial14.h
Normal file
1041
lib/EpdFont/builtinFonts/pixelarial14.h
Normal file
File diff suppressed because it is too large
Load Diff
Binary file not shown.
Binary file not shown.
Binary file not shown.
Binary file not shown.
Binary file not shown.
Binary file not shown.
Binary file not shown.
Binary file not shown.
@ -1,93 +0,0 @@
|
||||
Copyright 2022 The Noto Project Authors (https://github.com/notofonts/latin-greek-cyrillic)
|
||||
|
||||
This Font Software is licensed under the SIL Open Font License, Version 1.1.
|
||||
This license is copied below, and is also available with a FAQ at:
|
||||
https://openfontlicense.org
|
||||
|
||||
|
||||
-----------------------------------------------------------
|
||||
SIL OPEN FONT LICENSE Version 1.1 - 26 February 2007
|
||||
-----------------------------------------------------------
|
||||
|
||||
PREAMBLE
|
||||
The goals of the Open Font License (OFL) are to stimulate worldwide
|
||||
development of collaborative font projects, to support the font creation
|
||||
efforts of academic and linguistic communities, and to provide a free and
|
||||
open framework in which fonts may be shared and improved in partnership
|
||||
with others.
|
||||
|
||||
The OFL allows the licensed fonts to be used, studied, modified and
|
||||
redistributed freely as long as they are not sold by themselves. The
|
||||
fonts, including any derivative works, can be bundled, embedded,
|
||||
redistributed and/or sold with any software provided that any reserved
|
||||
names are not used by derivative works. The fonts and derivatives,
|
||||
however, cannot be released under any other type of license. The
|
||||
requirement for fonts to remain under this license does not apply
|
||||
to any document created using the fonts or their derivatives.
|
||||
|
||||
DEFINITIONS
|
||||
"Font Software" refers to the set of files released by the Copyright
|
||||
Holder(s) under this license and clearly marked as such. This may
|
||||
include source files, build scripts and documentation.
|
||||
|
||||
"Reserved Font Name" refers to any names specified as such after the
|
||||
copyright statement(s).
|
||||
|
||||
"Original Version" refers to the collection of Font Software components as
|
||||
distributed by the Copyright Holder(s).
|
||||
|
||||
"Modified Version" refers to any derivative made by adding to, deleting,
|
||||
or substituting -- in part or in whole -- any of the components of the
|
||||
Original Version, by changing formats or by porting the Font Software to a
|
||||
new environment.
|
||||
|
||||
"Author" refers to any designer, engineer, programmer, technical
|
||||
writer or other person who contributed to the Font Software.
|
||||
|
||||
PERMISSION & CONDITIONS
|
||||
Permission is hereby granted, free of charge, to any person obtaining
|
||||
a copy of the Font Software, to use, study, copy, merge, embed, modify,
|
||||
redistribute, and sell modified and unmodified copies of the Font
|
||||
Software, subject to the following conditions:
|
||||
|
||||
1) Neither the Font Software nor any of its individual components,
|
||||
in Original or Modified Versions, may be sold by itself.
|
||||
|
||||
2) Original or Modified Versions of the Font Software may be bundled,
|
||||
redistributed and/or sold with any software, provided that each copy
|
||||
contains the above copyright notice and this license. These can be
|
||||
included either as stand-alone text files, human-readable headers or
|
||||
in the appropriate machine-readable metadata fields within text or
|
||||
binary files as long as those fields can be easily viewed by the user.
|
||||
|
||||
3) No Modified Version of the Font Software may use the Reserved Font
|
||||
Name(s) unless explicit written permission is granted by the corresponding
|
||||
Copyright Holder. This restriction only applies to the primary font name as
|
||||
presented to the users.
|
||||
|
||||
4) The name(s) of the Copyright Holder(s) or the Author(s) of the Font
|
||||
Software shall not be used to promote, endorse or advertise any
|
||||
Modified Version, except to acknowledge the contribution(s) of the
|
||||
Copyright Holder(s) and the Author(s) or with their explicit written
|
||||
permission.
|
||||
|
||||
5) The Font Software, modified or unmodified, in part or in whole,
|
||||
must be distributed entirely under this license, and must not be
|
||||
distributed under any other license. The requirement for fonts to
|
||||
remain under this license does not apply to any document created
|
||||
using the Font Software.
|
||||
|
||||
TERMINATION
|
||||
This license becomes null and void if any of the above conditions are
|
||||
not met.
|
||||
|
||||
DISCLAIMER
|
||||
THE FONT SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
|
||||
EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO ANY WARRANTIES OF
|
||||
MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT
|
||||
OF COPYRIGHT, PATENT, TRADEMARK, OR OTHER RIGHT. IN NO EVENT SHALL THE
|
||||
COPYRIGHT HOLDER BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
|
||||
INCLUDING ANY GENERAL, SPECIAL, INDIRECT, INCIDENTAL, OR CONSEQUENTIAL
|
||||
DAMAGES, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
|
||||
FROM, OUT OF THE USE OR INABILITY TO USE THE FONT SOFTWARE OR FROM
|
||||
OTHER DEALINGS IN THE FONT SOFTWARE.
|
||||
@ -1,94 +0,0 @@
|
||||
Copyright (c) 2019-07-29, Abbie Gonzalez (https://abbiecod.es|support@abbiecod.es),
|
||||
with Reserved Font Name OpenDyslexic.
|
||||
Copyright (c) 12/2012 - 2019
|
||||
This Font Software is licensed under the SIL Open Font License, Version 1.1.
|
||||
This license is copied below, and is also available with a FAQ at:
|
||||
http://scripts.sil.org/OFL
|
||||
|
||||
|
||||
-----------------------------------------------------------
|
||||
SIL OPEN FONT LICENSE Version 1.1 - 26 February 2007
|
||||
-----------------------------------------------------------
|
||||
|
||||
PREAMBLE
|
||||
The goals of the Open Font License (OFL) are to stimulate worldwide
|
||||
development of collaborative font projects, to support the font creation
|
||||
efforts of academic and linguistic communities, and to provide a free and
|
||||
open framework in which fonts may be shared and improved in partnership
|
||||
with others.
|
||||
|
||||
The OFL allows the licensed fonts to be used, studied, modified and
|
||||
redistributed freely as long as they are not sold by themselves. The
|
||||
fonts, including any derivative works, can be bundled, embedded,
|
||||
redistributed and/or sold with any software provided that any reserved
|
||||
names are not used by derivative works. The fonts and derivatives,
|
||||
however, cannot be released under any other type of license. The
|
||||
requirement for fonts to remain under this license does not apply
|
||||
to any document created using the fonts or their derivatives.
|
||||
|
||||
DEFINITIONS
|
||||
"Font Software" refers to the set of files released by the Copyright
|
||||
Holder(s) under this license and clearly marked as such. This may
|
||||
include source files, build scripts and documentation.
|
||||
|
||||
"Reserved Font Name" refers to any names specified as such after the
|
||||
copyright statement(s).
|
||||
|
||||
"Original Version" refers to the collection of Font Software components as
|
||||
distributed by the Copyright Holder(s).
|
||||
|
||||
"Modified Version" refers to any derivative made by adding to, deleting,
|
||||
or substituting -- in part or in whole -- any of the components of the
|
||||
Original Version, by changing formats or by porting the Font Software to a
|
||||
new environment.
|
||||
|
||||
"Author" refers to any designer, engineer, programmer, technical
|
||||
writer or other person who contributed to the Font Software.
|
||||
|
||||
PERMISSION & CONDITIONS
|
||||
Permission is hereby granted, free of charge, to any person obtaining
|
||||
a copy of the Font Software, to use, study, copy, merge, embed, modify,
|
||||
redistribute, and sell modified and unmodified copies of the Font
|
||||
Software, subject to the following conditions:
|
||||
|
||||
1) Neither the Font Software nor any of its individual components,
|
||||
in Original or Modified Versions, may be sold by itself.
|
||||
|
||||
2) Original or Modified Versions of the Font Software may be bundled,
|
||||
redistributed and/or sold with any software, provided that each copy
|
||||
contains the above copyright notice and this license. These can be
|
||||
included either as stand-alone text files, human-readable headers or
|
||||
in the appropriate machine-readable metadata fields within text or
|
||||
binary files as long as those fields can be easily viewed by the user.
|
||||
|
||||
3) No Modified Version of the Font Software may use the Reserved Font
|
||||
Name(s) unless explicit written permission is granted by the corresponding
|
||||
Copyright Holder. This restriction only applies to the primary font name as
|
||||
presented to the users.
|
||||
|
||||
4) The name(s) of the Copyright Holder(s) or the Author(s) of the Font
|
||||
Software shall not be used to promote, endorse or advertise any
|
||||
Modified Version, except to acknowledge the contribution(s) of the
|
||||
Copyright Holder(s) and the Author(s) or with their explicit written
|
||||
permission.
|
||||
|
||||
5) The Font Software, modified or unmodified, in part or in whole,
|
||||
must be distributed entirely under this license, and must not be
|
||||
distributed under any other license. The requirement for fonts to
|
||||
remain under this license does not apply to any document created
|
||||
using the Font Software.
|
||||
|
||||
TERMINATION
|
||||
This license becomes null and void if any of the above conditions are
|
||||
not met.
|
||||
|
||||
DISCLAIMER
|
||||
THE FONT SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
|
||||
EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO ANY WARRANTIES OF
|
||||
MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT
|
||||
OF COPYRIGHT, PATENT, TRADEMARK, OR OTHER RIGHT. IN NO EVENT SHALL THE
|
||||
COPYRIGHT HOLDER BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
|
||||
INCLUDING ANY GENERAL, SPECIAL, INDIRECT, INCIDENTAL, OR CONSEQUENTIAL
|
||||
DAMAGES, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
|
||||
FROM, OUT OF THE USE OR INABILITY TO USE THE FONT SOFTWARE OR FROM
|
||||
OTHER DEALINGS IN THE FONT SOFTWARE.
|
||||
Binary file not shown.
Binary file not shown.
Binary file not shown.
Binary file not shown.
@ -1,96 +0,0 @@
|
||||
-------------------------------
|
||||
UBUNTU FONT LICENCE Version 1.0
|
||||
-------------------------------
|
||||
|
||||
PREAMBLE
|
||||
This licence allows the licensed fonts to be used, studied, modified and
|
||||
redistributed freely. The fonts, including any derivative works, can be
|
||||
bundled, embedded, and redistributed provided the terms of this licence
|
||||
are met. The fonts and derivatives, however, cannot be released under
|
||||
any other licence. The requirement for fonts to remain under this
|
||||
licence does not require any document created using the fonts or their
|
||||
derivatives to be published under this licence, as long as the primary
|
||||
purpose of the document is not to be a vehicle for the distribution of
|
||||
the fonts.
|
||||
|
||||
DEFINITIONS
|
||||
"Font Software" refers to the set of files released by the Copyright
|
||||
Holder(s) under this licence and clearly marked as such. This may
|
||||
include source files, build scripts and documentation.
|
||||
|
||||
"Original Version" refers to the collection of Font Software components
|
||||
as received under this licence.
|
||||
|
||||
"Modified Version" refers to any derivative made by adding to, deleting,
|
||||
or substituting -- in part or in whole -- any of the components of the
|
||||
Original Version, by changing formats or by porting the Font Software to
|
||||
a new environment.
|
||||
|
||||
"Copyright Holder(s)" refers to all individuals and companies who have a
|
||||
copyright ownership of the Font Software.
|
||||
|
||||
"Substantially Changed" refers to Modified Versions which can be easily
|
||||
identified as dissimilar to the Font Software by users of the Font
|
||||
Software comparing the Original Version with the Modified Version.
|
||||
|
||||
To "Propagate" a work means to do anything with it that, without
|
||||
permission, would make you directly or secondarily liable for
|
||||
infringement under applicable copyright law, except executing it on a
|
||||
computer or modifying a private copy. Propagation includes copying,
|
||||
distribution (with or without modification and with or without charging
|
||||
a redistribution fee), making available to the public, and in some
|
||||
countries other activities as well.
|
||||
|
||||
PERMISSION & CONDITIONS
|
||||
This licence does not grant any rights under trademark law and all such
|
||||
rights are reserved.
|
||||
|
||||
Permission is hereby granted, free of charge, to any person obtaining a
|
||||
copy of the Font Software, to propagate the Font Software, subject to
|
||||
the below conditions:
|
||||
|
||||
1) Each copy of the Font Software must contain the above copyright
|
||||
notice and this licence. These can be included either as stand-alone
|
||||
text files, human-readable headers or in the appropriate machine-
|
||||
readable metadata fields within text or binary files as long as those
|
||||
fields can be easily viewed by the user.
|
||||
|
||||
2) The font name complies with the following:
|
||||
(a) The Original Version must retain its name, unmodified.
|
||||
(b) Modified Versions which are Substantially Changed must be renamed to
|
||||
avoid use of the name of the Original Version or similar names entirely.
|
||||
(c) Modified Versions which are not Substantially Changed must be
|
||||
renamed to both (i) retain the name of the Original Version and (ii) add
|
||||
additional naming elements to distinguish the Modified Version from the
|
||||
Original Version. The name of such Modified Versions must be the name of
|
||||
the Original Version, with "derivative X" where X represents the name of
|
||||
the new work, appended to that name.
|
||||
|
||||
3) The name(s) of the Copyright Holder(s) and any contributor to the
|
||||
Font Software shall not be used to promote, endorse or advertise any
|
||||
Modified Version, except (i) as required by this licence, (ii) to
|
||||
acknowledge the contribution(s) of the Copyright Holder(s) or (iii) with
|
||||
their explicit written permission.
|
||||
|
||||
4) The Font Software, modified or unmodified, in part or in whole, must
|
||||
be distributed entirely under this licence, and must not be distributed
|
||||
under any other licence. The requirement for fonts to remain under this
|
||||
licence does not affect any document created using the Font Software,
|
||||
except any version of the Font Software extracted from a document
|
||||
created using the Font Software may only be distributed under this
|
||||
licence.
|
||||
|
||||
TERMINATION
|
||||
This licence becomes null and void if any of the above conditions are
|
||||
not met.
|
||||
|
||||
DISCLAIMER
|
||||
THE FONT SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
|
||||
EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO ANY WARRANTIES OF
|
||||
MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT OF
|
||||
COPYRIGHT, PATENT, TRADEMARK, OR OTHER RIGHT. IN NO EVENT SHALL THE
|
||||
COPYRIGHT HOLDER BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
|
||||
INCLUDING ANY GENERAL, SPECIAL, INDIRECT, INCIDENTAL, OR CONSEQUENTIAL
|
||||
DAMAGES, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
|
||||
FROM, OUT OF THE USE OR INABILITY TO USE THE FONT SOFTWARE OR FROM OTHER
|
||||
DEALINGS IN THE FONT SOFTWARE.
|
||||
Binary file not shown.
Binary file not shown.
1255
lib/EpdFont/builtinFonts/ubuntu_10.h
Normal file
1255
lib/EpdFont/builtinFonts/ubuntu_10.h
Normal file
File diff suppressed because it is too large
Load Diff
File diff suppressed because it is too large
Load Diff
File diff suppressed because it is too large
Load Diff
File diff suppressed because it is too large
Load Diff
File diff suppressed because it is too large
Load Diff
1327
lib/EpdFont/builtinFonts/ubuntu_bold_10.h
Normal file
1327
lib/EpdFont/builtinFonts/ubuntu_bold_10.h
Normal file
File diff suppressed because it is too large
Load Diff
@ -1,137 +0,0 @@
|
||||
#!/bin/bash
|
||||
|
||||
set -e
|
||||
|
||||
cd "$(dirname "$0")/../builtinFonts"
|
||||
|
||||
echo "// The contents of this file are generated by ./lib/EpdFont/scripts/build-font-ids.sh"
|
||||
echo "#pragma once"
|
||||
echo ""
|
||||
|
||||
echo "#define BOOKERLY_12_FONT_ID ($(
|
||||
ruby -rdigest -e 'puts [
|
||||
"./bookerly_12_regular.h",
|
||||
"./bookerly_12_bold.h",
|
||||
"./bookerly_12_bolditalic.h",
|
||||
"./bookerly_12_italic.h",
|
||||
].map{|f| Digest::SHA256.hexdigest(File.read(f)).to_i(16) }.sum % (2 ** 32) - (2 ** 31)'
|
||||
))"
|
||||
|
||||
echo "#define BOOKERLY_14_FONT_ID ($(
|
||||
ruby -rdigest -e 'puts [
|
||||
"./bookerly_14_regular.h",
|
||||
"./bookerly_14_bold.h",
|
||||
"./bookerly_14_bolditalic.h",
|
||||
"./bookerly_14_italic.h",
|
||||
].map{|f| Digest::SHA256.hexdigest(File.read(f)).to_i(16) }.sum % (2 ** 32) - (2 ** 31)'
|
||||
))"
|
||||
|
||||
echo "#define BOOKERLY_16_FONT_ID ($(
|
||||
ruby -rdigest -e 'puts [
|
||||
"./bookerly_16_regular.h",
|
||||
"./bookerly_16_bold.h",
|
||||
"./bookerly_16_bolditalic.h",
|
||||
"./bookerly_16_italic.h",
|
||||
].map{|f| Digest::SHA256.hexdigest(File.read(f)).to_i(16) }.sum % (2 ** 32) - (2 ** 31)'
|
||||
))"
|
||||
|
||||
echo "#define BOOKERLY_18_FONT_ID ($(
|
||||
ruby -rdigest -e 'puts [
|
||||
"./bookerly_18_regular.h",
|
||||
"./bookerly_18_bold.h",
|
||||
"./bookerly_18_bolditalic.h",
|
||||
"./bookerly_18_italic.h",
|
||||
].map{|f| Digest::SHA256.hexdigest(File.read(f)).to_i(16) }.sum % (2 ** 32) - (2 ** 31)'
|
||||
))"
|
||||
|
||||
echo "#define NOTOSANS_12_FONT_ID ($(
|
||||
ruby -rdigest -e 'puts [
|
||||
"./notosans_12_regular.h",
|
||||
"./notosans_12_bold.h",
|
||||
"./notosans_12_bolditalic.h",
|
||||
"./notosans_12_italic.h",
|
||||
].map{|f| Digest::SHA256.hexdigest(File.read(f)).to_i(16) }.sum % (2 ** 32) - (2 ** 31)'
|
||||
))"
|
||||
|
||||
echo "#define NOTOSANS_14_FONT_ID ($(
|
||||
ruby -rdigest -e 'puts [
|
||||
"./notosans_14_regular.h",
|
||||
"./notosans_14_bold.h",
|
||||
"./notosans_14_bolditalic.h",
|
||||
"./notosans_14_italic.h",
|
||||
].map{|f| Digest::SHA256.hexdigest(File.read(f)).to_i(16) }.sum % (2 ** 32) - (2 ** 31)'
|
||||
))"
|
||||
|
||||
echo "#define NOTOSANS_16_FONT_ID ($(
|
||||
ruby -rdigest -e 'puts [
|
||||
"./notosans_16_regular.h",
|
||||
"./notosans_16_bold.h",
|
||||
"./notosans_16_bolditalic.h",
|
||||
"./notosans_16_italic.h",
|
||||
].map{|f| Digest::SHA256.hexdigest(File.read(f)).to_i(16) }.sum % (2 ** 32) - (2 ** 31)'
|
||||
))"
|
||||
|
||||
echo "#define NOTOSANS_18_FONT_ID ($(
|
||||
ruby -rdigest -e 'puts [
|
||||
"./notosans_18_regular.h",
|
||||
"./notosans_18_bold.h",
|
||||
"./notosans_18_bolditalic.h",
|
||||
"./notosans_18_italic.h",
|
||||
].map{|f| Digest::SHA256.hexdigest(File.read(f)).to_i(16) }.sum % (2 ** 32) - (2 ** 31)'
|
||||
))"
|
||||
|
||||
echo "#define OPENDYSLEXIC_8_FONT_ID ($(
|
||||
ruby -rdigest -e 'puts [
|
||||
"./opendyslexic_8_regular.h",
|
||||
"./opendyslexic_8_bold.h",
|
||||
"./opendyslexic_8_bolditalic.h",
|
||||
"./opendyslexic_8_italic.h",
|
||||
].map{|f| Digest::SHA256.hexdigest(File.read(f)).to_i(16) }.sum % (2 ** 32) - (2 ** 31)'
|
||||
))"
|
||||
|
||||
echo "#define OPENDYSLEXIC_10_FONT_ID ($(
|
||||
ruby -rdigest -e 'puts [
|
||||
"./opendyslexic_10_regular.h",
|
||||
"./opendyslexic_10_bold.h",
|
||||
"./opendyslexic_10_bolditalic.h",
|
||||
"./opendyslexic_10_italic.h",
|
||||
].map{|f| Digest::SHA256.hexdigest(File.read(f)).to_i(16) }.sum % (2 ** 32) - (2 ** 31)'
|
||||
))"
|
||||
|
||||
echo "#define OPENDYSLEXIC_12_FONT_ID ($(
|
||||
ruby -rdigest -e 'puts [
|
||||
"./opendyslexic_12_regular.h",
|
||||
"./opendyslexic_12_bold.h",
|
||||
"./opendyslexic_12_bolditalic.h",
|
||||
"./opendyslexic_12_italic.h",
|
||||
].map{|f| Digest::SHA256.hexdigest(File.read(f)).to_i(16) }.sum % (2 ** 32) - (2 ** 31)'
|
||||
))"
|
||||
|
||||
echo "#define OPENDYSLEXIC_14_FONT_ID ($(
|
||||
ruby -rdigest -e 'puts [
|
||||
"./opendyslexic_14_regular.h",
|
||||
"./opendyslexic_14_bold.h",
|
||||
"./opendyslexic_14_bolditalic.h",
|
||||
"./opendyslexic_14_italic.h",
|
||||
].map{|f| Digest::SHA256.hexdigest(File.read(f)).to_i(16) }.sum % (2 ** 32) - (2 ** 31)'
|
||||
))"
|
||||
|
||||
echo "#define UI_10_FONT_ID ($(
|
||||
ruby -rdigest -e 'puts [
|
||||
"./ubuntu_10_regular.h",
|
||||
"./ubuntu_10_bold.h",
|
||||
].map{|f| Digest::SHA256.hexdigest(File.read(f)).to_i(16) }.sum % (2 ** 32) - (2 ** 31)'
|
||||
))"
|
||||
|
||||
echo "#define UI_12_FONT_ID ($(
|
||||
ruby -rdigest -e 'puts [
|
||||
"./ubuntu_12_regular.h",
|
||||
"./ubuntu_12_bold.h",
|
||||
].map{|f| Digest::SHA256.hexdigest(File.read(f)).to_i(16) }.sum % (2 ** 32) - (2 ** 31)'
|
||||
))"
|
||||
|
||||
echo "#define SMALL_FONT_ID ($(
|
||||
ruby -rdigest -e 'puts [
|
||||
"./notosans_8_regular.h",
|
||||
].map{|f| Digest::SHA256.hexdigest(File.read(f)).to_i(16) }.sum % (2 ** 32) - (2 ** 31)'
|
||||
))"
|
||||
@ -1,55 +0,0 @@
|
||||
#!/bin/bash
|
||||
|
||||
set -e
|
||||
|
||||
cd "$(dirname "$0")"
|
||||
|
||||
READER_FONT_STYLES=("Regular" "Italic" "Bold" "BoldItalic")
|
||||
BOOKERLY_FONT_SIZES=(12 14 16 18)
|
||||
NOTOSANS_FONT_SIZES=(12 14 16 18)
|
||||
OPENDYSLEXIC_FONT_SIZES=(8 10 12 14)
|
||||
|
||||
for size in ${BOOKERLY_FONT_SIZES[@]}; do
|
||||
for style in ${READER_FONT_STYLES[@]}; do
|
||||
font_name="bookerly_${size}_$(echo $style | tr '[:upper:]' '[:lower:]')"
|
||||
font_path="../builtinFonts/source/Bookerly/Bookerly-${style}.ttf"
|
||||
output_path="../builtinFonts/${font_name}.h"
|
||||
python fontconvert.py $font_name $size $font_path --2bit > $output_path
|
||||
echo "Generated $output_path"
|
||||
done
|
||||
done
|
||||
|
||||
for size in ${NOTOSANS_FONT_SIZES[@]}; do
|
||||
for style in ${READER_FONT_STYLES[@]}; do
|
||||
font_name="notosans_${size}_$(echo $style | tr '[:upper:]' '[:lower:]')"
|
||||
font_path="../builtinFonts/source/NotoSans/NotoSans-${style}.ttf"
|
||||
output_path="../builtinFonts/${font_name}.h"
|
||||
python fontconvert.py $font_name $size $font_path --2bit > $output_path
|
||||
echo "Generated $output_path"
|
||||
done
|
||||
done
|
||||
|
||||
for size in ${OPENDYSLEXIC_FONT_SIZES[@]}; do
|
||||
for style in ${READER_FONT_STYLES[@]}; do
|
||||
font_name="opendyslexic_${size}_$(echo $style | tr '[:upper:]' '[:lower:]')"
|
||||
font_path="../builtinFonts/source/OpenDyslexic/OpenDyslexic-${style}.otf"
|
||||
output_path="../builtinFonts/${font_name}.h"
|
||||
python fontconvert.py $font_name $size $font_path --2bit > $output_path
|
||||
echo "Generated $output_path"
|
||||
done
|
||||
done
|
||||
|
||||
UI_FONT_SIZES=(10 12)
|
||||
UI_FONT_STYLES=("Regular" "Bold")
|
||||
|
||||
for size in ${UI_FONT_SIZES[@]}; do
|
||||
for style in ${UI_FONT_STYLES[@]}; do
|
||||
font_name="ubuntu_${size}_$(echo $style | tr '[:upper:]' '[:lower:]')"
|
||||
font_path="../builtinFonts/source/Ubuntu/Ubuntu-${style}.ttf"
|
||||
output_path="../builtinFonts/${font_name}.h"
|
||||
python fontconvert.py $font_name $size $font_path > $output_path
|
||||
echo "Generated $output_path"
|
||||
done
|
||||
done
|
||||
|
||||
python fontconvert.py notosans_8_regular 8 ../builtinFonts/source/NotoSans/NotoSans-Regular.ttf > ../builtinFonts/notosans_8_regular.h
|
||||
@ -3,12 +3,11 @@
|
||||
#include <FsHelpers.h>
|
||||
#include <HardwareSerial.h>
|
||||
#include <JpegToBmpConverter.h>
|
||||
#include <SDCardManager.h>
|
||||
#include <SD.h>
|
||||
#include <ZipFile.h>
|
||||
|
||||
#include "Epub/parsers/ContainerParser.h"
|
||||
#include "Epub/parsers/ContentOpfParser.h"
|
||||
#include "Epub/parsers/TocNavParser.h"
|
||||
#include "Epub/parsers/TocNcxParser.h"
|
||||
|
||||
bool Epub::findContentOpfFile(std::string* contentOpfFile) const {
|
||||
@ -61,6 +60,9 @@ bool Epub::parseContentOpf(BookMetadataCache::BookMetadata& bookMetadata) {
|
||||
}
|
||||
|
||||
ContentOpfParser opfParser(getCachePath(), getBasePath(), contentOpfSize, bookMetadataCache.get());
|
||||
Serial.printf("[%lu] [MEM] Free: %d bytes, Total: %d bytes, Min Free: %d bytes\n", millis(), ESP.getFreeHeap(),
|
||||
ESP.getHeapSize(), ESP.getMinFreeHeap());
|
||||
|
||||
if (!opfParser.setup()) {
|
||||
Serial.printf("[%lu] [EBP] Could not setup content.opf parser\n", millis());
|
||||
return false;
|
||||
@ -73,18 +75,14 @@ bool Epub::parseContentOpf(BookMetadataCache::BookMetadata& bookMetadata) {
|
||||
|
||||
// Grab data from opfParser into epub
|
||||
bookMetadata.title = opfParser.title;
|
||||
bookMetadata.author = opfParser.author;
|
||||
// TODO: Parse author
|
||||
bookMetadata.author = "";
|
||||
bookMetadata.coverItemHref = opfParser.coverItemHref;
|
||||
bookMetadata.textReferenceHref = opfParser.textReferenceHref;
|
||||
|
||||
if (!opfParser.tocNcxPath.empty()) {
|
||||
tocNcxItem = opfParser.tocNcxPath;
|
||||
}
|
||||
|
||||
if (!opfParser.tocNavPath.empty()) {
|
||||
tocNavItem = opfParser.tocNavPath;
|
||||
}
|
||||
|
||||
Serial.printf("[%lu] [EBP] Successfully parsed content.opf\n", millis());
|
||||
return true;
|
||||
}
|
||||
@ -99,13 +97,13 @@ bool Epub::parseTocNcxFile() const {
|
||||
Serial.printf("[%lu] [EBP] Parsing toc ncx file: %s\n", millis(), tocNcxItem.c_str());
|
||||
|
||||
const auto tmpNcxPath = getCachePath() + "/toc.ncx";
|
||||
FsFile tempNcxFile;
|
||||
if (!SdMan.openFileForWrite("EBP", tmpNcxPath, tempNcxFile)) {
|
||||
File tempNcxFile;
|
||||
if (!FsHelpers::openFileForWrite("EBP", tmpNcxPath, tempNcxFile)) {
|
||||
return false;
|
||||
}
|
||||
readItemContentsToStream(tocNcxItem, tempNcxFile, 1024);
|
||||
tempNcxFile.close();
|
||||
if (!SdMan.openFileForRead("EBP", tmpNcxPath, tempNcxFile)) {
|
||||
if (!FsHelpers::openFileForRead("EBP", tmpNcxPath, tempNcxFile)) {
|
||||
return false;
|
||||
}
|
||||
const auto ncxSize = tempNcxFile.size();
|
||||
@ -114,20 +112,17 @@ bool Epub::parseTocNcxFile() const {
|
||||
|
||||
if (!ncxParser.setup()) {
|
||||
Serial.printf("[%lu] [EBP] Could not setup toc ncx parser\n", millis());
|
||||
tempNcxFile.close();
|
||||
return false;
|
||||
}
|
||||
|
||||
const auto ncxBuffer = static_cast<uint8_t*>(malloc(1024));
|
||||
if (!ncxBuffer) {
|
||||
Serial.printf("[%lu] [EBP] Could not allocate memory for toc ncx parser\n", millis());
|
||||
tempNcxFile.close();
|
||||
return false;
|
||||
}
|
||||
|
||||
while (tempNcxFile.available()) {
|
||||
const auto readSize = tempNcxFile.read(ncxBuffer, 1024);
|
||||
if (readSize == 0) break;
|
||||
const auto processedSize = ncxParser.write(ncxBuffer, readSize);
|
||||
|
||||
if (processedSize != readSize) {
|
||||
@ -140,68 +135,14 @@ bool Epub::parseTocNcxFile() const {
|
||||
|
||||
free(ncxBuffer);
|
||||
tempNcxFile.close();
|
||||
SdMan.remove(tmpNcxPath.c_str());
|
||||
SD.remove(tmpNcxPath.c_str());
|
||||
|
||||
Serial.printf("[%lu] [EBP] Parsed TOC items\n", millis());
|
||||
return true;
|
||||
}
|
||||
|
||||
bool Epub::parseTocNavFile() const {
|
||||
// the nav file should have been specified in the content.opf file (EPUB 3)
|
||||
if (tocNavItem.empty()) {
|
||||
Serial.printf("[%lu] [EBP] No nav file specified\n", millis());
|
||||
return false;
|
||||
}
|
||||
|
||||
Serial.printf("[%lu] [EBP] Parsing toc nav file: %s\n", millis(), tocNavItem.c_str());
|
||||
|
||||
const auto tmpNavPath = getCachePath() + "/toc.nav";
|
||||
FsFile tempNavFile;
|
||||
if (!SdMan.openFileForWrite("EBP", tmpNavPath, tempNavFile)) {
|
||||
return false;
|
||||
}
|
||||
readItemContentsToStream(tocNavItem, tempNavFile, 1024);
|
||||
tempNavFile.close();
|
||||
if (!SdMan.openFileForRead("EBP", tmpNavPath, tempNavFile)) {
|
||||
return false;
|
||||
}
|
||||
const auto navSize = tempNavFile.size();
|
||||
|
||||
TocNavParser navParser(contentBasePath, navSize, bookMetadataCache.get());
|
||||
|
||||
if (!navParser.setup()) {
|
||||
Serial.printf("[%lu] [EBP] Could not setup toc nav parser\n", millis());
|
||||
return false;
|
||||
}
|
||||
|
||||
const auto navBuffer = static_cast<uint8_t*>(malloc(1024));
|
||||
if (!navBuffer) {
|
||||
Serial.printf("[%lu] [EBP] Could not allocate memory for toc nav parser\n", millis());
|
||||
return false;
|
||||
}
|
||||
|
||||
while (tempNavFile.available()) {
|
||||
const auto readSize = tempNavFile.read(navBuffer, 1024);
|
||||
const auto processedSize = navParser.write(navBuffer, readSize);
|
||||
|
||||
if (processedSize != readSize) {
|
||||
Serial.printf("[%lu] [EBP] Could not process all toc nav data\n", millis());
|
||||
free(navBuffer);
|
||||
tempNavFile.close();
|
||||
return false;
|
||||
}
|
||||
}
|
||||
|
||||
free(navBuffer);
|
||||
tempNavFile.close();
|
||||
SdMan.remove(tmpNavPath.c_str());
|
||||
|
||||
Serial.printf("[%lu] [EBP] Parsed TOC nav items\n", millis());
|
||||
return true;
|
||||
}
|
||||
|
||||
// load in the meta data for the epub file
|
||||
bool Epub::load(const bool buildIfMissing) {
|
||||
bool Epub::load() {
|
||||
Serial.printf("[%lu] [EBP] Loading ePub: %s\n", millis(), filepath.c_str());
|
||||
|
||||
// Initialize spine/TOC cache
|
||||
@ -213,11 +154,6 @@ bool Epub::load(const bool buildIfMissing) {
|
||||
return true;
|
||||
}
|
||||
|
||||
// If we didn't load from cache above and we aren't allowed to build, fail now
|
||||
if (!buildIfMissing) {
|
||||
return false;
|
||||
}
|
||||
|
||||
// Cache doesn't exist or is invalid, build it
|
||||
Serial.printf("[%lu] [EBP] Cache not found, building spine/TOC cache\n", millis());
|
||||
setupCacheDir();
|
||||
@ -243,31 +179,15 @@ bool Epub::load(const bool buildIfMissing) {
|
||||
return false;
|
||||
}
|
||||
|
||||
// TOC Pass - try EPUB 3 nav first, fall back to NCX
|
||||
// TOC Pass
|
||||
if (!bookMetadataCache->beginTocPass()) {
|
||||
Serial.printf("[%lu] [EBP] Could not begin writing toc pass\n", millis());
|
||||
return false;
|
||||
}
|
||||
|
||||
bool tocParsed = false;
|
||||
|
||||
// Try EPUB 3 nav document first (preferred)
|
||||
if (!tocNavItem.empty()) {
|
||||
Serial.printf("[%lu] [EBP] Attempting to parse EPUB 3 nav document\n", millis());
|
||||
tocParsed = parseTocNavFile();
|
||||
if (!parseTocNcxFile()) {
|
||||
Serial.printf("[%lu] [EBP] Could not parse toc\n", millis());
|
||||
return false;
|
||||
}
|
||||
|
||||
// Fall back to NCX if nav parsing failed or wasn't available
|
||||
if (!tocParsed && !tocNcxItem.empty()) {
|
||||
Serial.printf("[%lu] [EBP] Falling back to NCX TOC\n", millis());
|
||||
tocParsed = parseTocNcxFile();
|
||||
}
|
||||
|
||||
if (!tocParsed) {
|
||||
Serial.printf("[%lu] [EBP] Warning: Could not parse any TOC format\n", millis());
|
||||
// Continue anyway - book will work without TOC
|
||||
}
|
||||
|
||||
if (!bookMetadataCache->endTocPass()) {
|
||||
Serial.printf("[%lu] [EBP] Could not end writing toc pass\n", millis());
|
||||
return false;
|
||||
@ -301,12 +221,12 @@ bool Epub::load(const bool buildIfMissing) {
|
||||
}
|
||||
|
||||
bool Epub::clearCache() const {
|
||||
if (!SdMan.exists(cachePath.c_str())) {
|
||||
if (!SD.exists(cachePath.c_str())) {
|
||||
Serial.printf("[%lu] [EPB] Cache does not exist, no action needed\n", millis());
|
||||
return true;
|
||||
}
|
||||
|
||||
if (!SdMan.removeDir(cachePath.c_str())) {
|
||||
if (!FsHelpers::removeDir(cachePath.c_str())) {
|
||||
Serial.printf("[%lu] [EPB] Failed to clear cache\n", millis());
|
||||
return false;
|
||||
}
|
||||
@ -316,11 +236,17 @@ bool Epub::clearCache() const {
|
||||
}
|
||||
|
||||
void Epub::setupCacheDir() const {
|
||||
if (SdMan.exists(cachePath.c_str())) {
|
||||
if (SD.exists(cachePath.c_str())) {
|
||||
return;
|
||||
}
|
||||
|
||||
SdMan.mkdir(cachePath.c_str());
|
||||
// Loop over each segment of the cache path and create directories as needed
|
||||
for (size_t i = 1; i < cachePath.length(); i++) {
|
||||
if (cachePath[i] == '/') {
|
||||
SD.mkdir(cachePath.substr(0, i).c_str());
|
||||
}
|
||||
}
|
||||
SD.mkdir(cachePath.c_str());
|
||||
}
|
||||
|
||||
const std::string& Epub::getCachePath() const { return cachePath; }
|
||||
@ -336,20 +262,11 @@ const std::string& Epub::getTitle() const {
|
||||
return bookMetadataCache->coreMetadata.title;
|
||||
}
|
||||
|
||||
const std::string& Epub::getAuthor() const {
|
||||
static std::string blank;
|
||||
if (!bookMetadataCache || !bookMetadataCache->isLoaded()) {
|
||||
return blank;
|
||||
}
|
||||
|
||||
return bookMetadataCache->coreMetadata.author;
|
||||
}
|
||||
|
||||
std::string Epub::getCoverBmpPath() const { return cachePath + "/cover.bmp"; }
|
||||
|
||||
bool Epub::generateCoverBmp() const {
|
||||
// Already generated, return true
|
||||
if (SdMan.exists(getCoverBmpPath().c_str())) {
|
||||
if (SD.exists(getCoverBmpPath().c_str())) {
|
||||
return true;
|
||||
}
|
||||
|
||||
@ -369,30 +286,30 @@ bool Epub::generateCoverBmp() const {
|
||||
Serial.printf("[%lu] [EBP] Generating BMP from JPG cover image\n", millis());
|
||||
const auto coverJpgTempPath = getCachePath() + "/.cover.jpg";
|
||||
|
||||
FsFile coverJpg;
|
||||
if (!SdMan.openFileForWrite("EBP", coverJpgTempPath, coverJpg)) {
|
||||
File coverJpg;
|
||||
if (!FsHelpers::openFileForWrite("EBP", coverJpgTempPath, coverJpg)) {
|
||||
return false;
|
||||
}
|
||||
readItemContentsToStream(coverImageHref, coverJpg, 1024);
|
||||
coverJpg.close();
|
||||
|
||||
if (!SdMan.openFileForRead("EBP", coverJpgTempPath, coverJpg)) {
|
||||
if (!FsHelpers::openFileForRead("EBP", coverJpgTempPath, coverJpg)) {
|
||||
return false;
|
||||
}
|
||||
|
||||
FsFile coverBmp;
|
||||
if (!SdMan.openFileForWrite("EBP", getCoverBmpPath(), coverBmp)) {
|
||||
File coverBmp;
|
||||
if (!FsHelpers::openFileForWrite("EBP", getCoverBmpPath(), coverBmp)) {
|
||||
coverJpg.close();
|
||||
return false;
|
||||
}
|
||||
const bool success = JpegToBmpConverter::jpegFileToBmpStream(coverJpg, coverBmp);
|
||||
coverJpg.close();
|
||||
coverBmp.close();
|
||||
SdMan.remove(coverJpgTempPath.c_str());
|
||||
SD.remove(coverJpgTempPath.c_str());
|
||||
|
||||
if (!success) {
|
||||
Serial.printf("[%lu] [EBP] Failed to generate BMP from JPG cover image\n", millis());
|
||||
SdMan.remove(getCoverBmpPath().c_str());
|
||||
SD.remove(getCoverBmpPath().c_str());
|
||||
}
|
||||
Serial.printf("[%lu] [EBP] Generated BMP from JPG cover image, success: %s\n", millis(), success ? "yes" : "no");
|
||||
return success;
|
||||
@ -404,14 +321,10 @@ bool Epub::generateCoverBmp() const {
|
||||
}
|
||||
|
||||
uint8_t* Epub::readItemContentsToBytes(const std::string& itemHref, size_t* size, const bool trailingNullByte) const {
|
||||
if (itemHref.empty()) {
|
||||
Serial.printf("[%lu] [EBP] Failed to read item, empty href\n", millis());
|
||||
return nullptr;
|
||||
}
|
||||
|
||||
const ZipFile zip("/sd" + filepath);
|
||||
const std::string path = FsHelpers::normalisePath(itemHref);
|
||||
|
||||
const auto content = ZipFile(filepath).readFileToMemory(path.c_str(), size, trailingNullByte);
|
||||
const auto content = zip.readFileToMemory(path.c_str(), size, trailingNullByte);
|
||||
if (!content) {
|
||||
Serial.printf("[%lu] [EBP] Failed to read item %s\n", millis(), path.c_str());
|
||||
return nullptr;
|
||||
@ -421,18 +334,20 @@ uint8_t* Epub::readItemContentsToBytes(const std::string& itemHref, size_t* size
|
||||
}
|
||||
|
||||
bool Epub::readItemContentsToStream(const std::string& itemHref, Print& out, const size_t chunkSize) const {
|
||||
if (itemHref.empty()) {
|
||||
Serial.printf("[%lu] [EBP] Failed to read item, empty href\n", millis());
|
||||
return false;
|
||||
}
|
||||
|
||||
const ZipFile zip("/sd" + filepath);
|
||||
const std::string path = FsHelpers::normalisePath(itemHref);
|
||||
return ZipFile(filepath).readFileToStream(path.c_str(), out, chunkSize);
|
||||
|
||||
return zip.readFileToStream(path.c_str(), out, chunkSize);
|
||||
}
|
||||
|
||||
bool Epub::getItemSize(const std::string& itemHref, size_t* size) const {
|
||||
const ZipFile zip("/sd" + filepath);
|
||||
return getItemSize(zip, itemHref, size);
|
||||
}
|
||||
|
||||
bool Epub::getItemSize(const ZipFile& zip, const std::string& itemHref, size_t* size) {
|
||||
const std::string path = FsHelpers::normalisePath(itemHref);
|
||||
return ZipFile(filepath).getInflatedFileSize(path.c_str(), size);
|
||||
return zip.getInflatedFileSize(path.c_str(), size);
|
||||
}
|
||||
|
||||
int Epub::getSpineItemsCount() const {
|
||||
@ -510,35 +425,6 @@ size_t Epub::getBookSize() const {
|
||||
return getCumulativeSpineItemSize(getSpineItemsCount() - 1);
|
||||
}
|
||||
|
||||
int Epub::getSpineIndexForTextReference() const {
|
||||
if (!bookMetadataCache || !bookMetadataCache->isLoaded()) {
|
||||
Serial.printf("[%lu] [EBP] getSpineIndexForTextReference called but cache not loaded\n", millis());
|
||||
return 0;
|
||||
}
|
||||
Serial.printf("[%lu] [ERS] Core Metadata: cover(%d)=%s, textReference(%d)=%s\n", millis(),
|
||||
bookMetadataCache->coreMetadata.coverItemHref.size(),
|
||||
bookMetadataCache->coreMetadata.coverItemHref.c_str(),
|
||||
bookMetadataCache->coreMetadata.textReferenceHref.size(),
|
||||
bookMetadataCache->coreMetadata.textReferenceHref.c_str());
|
||||
|
||||
if (bookMetadataCache->coreMetadata.textReferenceHref.empty()) {
|
||||
// there was no textReference in epub, so we return 0 (the first chapter)
|
||||
return 0;
|
||||
}
|
||||
|
||||
// loop through spine items to get the correct index matching the text href
|
||||
for (size_t i = 0; i < getSpineItemsCount(); i++) {
|
||||
if (getSpineItem(i).href == bookMetadataCache->coreMetadata.textReferenceHref) {
|
||||
Serial.printf("[%lu] [ERS] Text reference %s found at index %d\n", millis(),
|
||||
bookMetadataCache->coreMetadata.textReferenceHref.c_str(), i);
|
||||
return i;
|
||||
}
|
||||
}
|
||||
// This should not happen, as we checked for empty textReferenceHref earlier
|
||||
Serial.printf("[%lu] [EBP] Section not found for text reference\n", millis());
|
||||
return 0;
|
||||
}
|
||||
|
||||
// Calculate progress in book
|
||||
uint8_t Epub::calculateProgress(const int currentSpineIndex, const float currentSpineRead) const {
|
||||
const size_t bookSize = getBookSize();
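The hunk above is cut off right after calculateProgress fetches getBookSize(), so its body is not visible here. As a hedged, standalone sketch of how whole-book progress can be derived from the cumulative spine sizes that the metadata cache stores (not necessarily the exact implementation), the idea is: bytes read = cumulative size of all spine items before the current one, plus the fraction read of the current item.

```cpp
#include <cstddef>
#include <cstdint>

// Illustrative only: progress as a percentage of total book bytes, given the
// cumulative spine sizes cached in book.bin. The real method may differ in detail.
uint8_t progressPercent(size_t cumulativeBeforeCurrent, size_t currentItemSize,
                        float currentSpineRead, size_t bookSize) {
  if (bookSize == 0) return 0;
  const float bytesRead = static_cast<float>(cumulativeBeforeCurrent) +
                          currentSpineRead * static_cast<float>(currentItemSize);
  float pct = 100.0f * bytesRead / static_cast<float>(bookSize);
  if (pct < 0) pct = 0;
  if (pct > 100) pct = 100;
  return static_cast<uint8_t>(pct);
}
```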
|
||||
|
||||
@ -1,7 +1,5 @@
|
||||
#pragma once
|
||||
|
||||
#include <Print.h>
|
||||
|
||||
#include <memory>
|
||||
#include <string>
|
||||
#include <unordered_map>
|
||||
@ -12,10 +10,8 @@
|
||||
class ZipFile;
|
||||
|
||||
class Epub {
|
||||
// the ncx file (EPUB 2)
|
||||
// the ncx file
|
||||
std::string tocNcxItem;
|
||||
// the nav file (EPUB 3)
|
||||
std::string tocNavItem;
|
||||
// where is the EPUB file?
|
||||
std::string filepath;
|
||||
// the base path for items in the EPUB file
|
||||
@ -28,7 +24,7 @@ class Epub {
|
||||
bool findContentOpfFile(std::string* contentOpfFile) const;
|
||||
bool parseContentOpf(BookMetadataCache::BookMetadata& bookMetadata);
|
||||
bool parseTocNcxFile() const;
|
||||
bool parseTocNavFile() const;
|
||||
static bool getItemSize(const ZipFile& zip, const std::string& itemHref, size_t* size);
|
||||
|
||||
public:
|
||||
explicit Epub(std::string filepath, const std::string& cacheDir) : filepath(std::move(filepath)) {
|
||||
@ -37,13 +33,12 @@ class Epub {
|
||||
}
|
||||
~Epub() = default;
|
||||
std::string& getBasePath() { return contentBasePath; }
|
||||
bool load(bool buildIfMissing = true);
|
||||
bool load();
|
||||
bool clearCache() const;
|
||||
void setupCacheDir() const;
|
||||
const std::string& getCachePath() const;
|
||||
const std::string& getPath() const;
|
||||
const std::string& getTitle() const;
|
||||
const std::string& getAuthor() const;
|
||||
std::string getCoverBmpPath() const;
|
||||
bool generateCoverBmp() const;
|
||||
uint8_t* readItemContentsToBytes(const std::string& itemHref, size_t* size = nullptr,
|
||||
@ -57,8 +52,7 @@ class Epub {
|
||||
int getSpineIndexForTocIndex(int tocIndex) const;
|
||||
int getTocIndexForSpineIndex(int spineIndex) const;
|
||||
size_t getCumulativeSpineItemSize(int spineIndex) const;
|
||||
int getSpineIndexForTextReference() const;
|
||||
|
||||
size_t getBookSize() const;
|
||||
uint8_t calculateProgress(int currentSpineIndex, float currentSpineRead) const;
|
||||
uint8_t calculateProgress(const int currentSpineIndex, const float currentSpineRead) const;
|
||||
};
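With the buildIfMissing flag added to load() (defaulting to true), a caller that only wants cached metadata — e.g. getTitle(), getAuthor() or getCoverBmpPath() — can probe the cache without triggering a rebuild. A hypothetical caller-side sketch, assuming only the API declared in this header:

```cpp
#include <memory>
#include <string>

#include "Epub.h"

// Hypothetical helper: reuse an existing spine/TOC cache if present, but never
// build one just to read metadata.
std::shared_ptr<Epub> openCachedOnly(const std::string& path, const std::string& cacheDir) {
  auto epub = std::make_shared<Epub>(path, cacheDir);
  if (!epub->load(/*buildIfMissing=*/false)) {
    return nullptr;  // no cache yet; caller can fall back to a full load later
  }
  return epub;
}
```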
|
||||
|
||||
@ -1,6 +1,7 @@
|
||||
#include "BookMetadataCache.h"
|
||||
|
||||
#include <HardwareSerial.h>
|
||||
#include <SD.h>
|
||||
#include <Serialization.h>
|
||||
#include <ZipFile.h>
|
||||
|
||||
@ -9,7 +10,7 @@
|
||||
#include "FsHelpers.h"
|
||||
|
||||
namespace {
|
||||
constexpr uint8_t BOOK_CACHE_VERSION = 3;
|
||||
constexpr uint8_t BOOK_CACHE_VERSION = 1;
|
||||
constexpr char bookBinFile[] = "/book.bin";
|
||||
constexpr char tmpSpineBinFile[] = "/spine.bin.tmp";
|
||||
constexpr char tmpTocBinFile[] = "/toc.bin.tmp";
|
||||
@ -29,7 +30,7 @@ bool BookMetadataCache::beginContentOpfPass() {
|
||||
Serial.printf("[%lu] [BMC] Beginning content opf pass\n", millis());
|
||||
|
||||
// Open spine file for writing
|
||||
return SdMan.openFileForWrite("BMC", cachePath + tmpSpineBinFile, spineFile);
|
||||
return FsHelpers::openFileForWrite("BMC", cachePath + tmpSpineBinFile, spineFile);
|
||||
}
|
||||
|
||||
bool BookMetadataCache::endContentOpfPass() {
|
||||
@ -41,10 +42,10 @@ bool BookMetadataCache::beginTocPass() {
|
||||
Serial.printf("[%lu] [BMC] Beginning toc pass\n", millis());
|
||||
|
||||
// Open spine file for reading
|
||||
if (!SdMan.openFileForRead("BMC", cachePath + tmpSpineBinFile, spineFile)) {
|
||||
if (!FsHelpers::openFileForRead("BMC", cachePath + tmpSpineBinFile, spineFile)) {
|
||||
return false;
|
||||
}
|
||||
if (!SdMan.openFileForWrite("BMC", cachePath + tmpTocBinFile, tocFile)) {
|
||||
if (!FsHelpers::openFileForWrite("BMC", cachePath + tmpTocBinFile, tocFile)) {
|
||||
spineFile.close();
|
||||
return false;
|
||||
}
|
||||
@ -70,27 +71,27 @@ bool BookMetadataCache::endWrite() {
|
||||
|
||||
bool BookMetadataCache::buildBookBin(const std::string& epubPath, const BookMetadata& metadata) {
|
||||
// Open all three files, writing to meta, reading from spine and toc
|
||||
if (!SdMan.openFileForWrite("BMC", cachePath + bookBinFile, bookFile)) {
|
||||
if (!FsHelpers::openFileForWrite("BMC", cachePath + bookBinFile, bookFile)) {
|
||||
return false;
|
||||
}
|
||||
|
||||
if (!SdMan.openFileForRead("BMC", cachePath + tmpSpineBinFile, spineFile)) {
|
||||
if (!FsHelpers::openFileForRead("BMC", cachePath + tmpSpineBinFile, spineFile)) {
|
||||
bookFile.close();
|
||||
return false;
|
||||
}
|
||||
|
||||
if (!SdMan.openFileForRead("BMC", cachePath + tmpTocBinFile, tocFile)) {
|
||||
if (!FsHelpers::openFileForRead("BMC", cachePath + tmpTocBinFile, tocFile)) {
|
||||
bookFile.close();
|
||||
spineFile.close();
|
||||
return false;
|
||||
}
|
||||
|
||||
constexpr uint32_t headerASize =
|
||||
sizeof(BOOK_CACHE_VERSION) + /* LUT Offset */ sizeof(uint32_t) + sizeof(spineCount) + sizeof(tocCount);
|
||||
const uint32_t metadataSize = metadata.title.size() + metadata.author.size() + metadata.coverItemHref.size() +
|
||||
metadata.textReferenceHref.size() + sizeof(uint32_t) * 4;
|
||||
const uint32_t lutSize = sizeof(uint32_t) * spineCount + sizeof(uint32_t) * tocCount;
|
||||
const uint32_t lutOffset = headerASize + metadataSize;
|
||||
constexpr size_t headerASize =
|
||||
sizeof(BOOK_CACHE_VERSION) + /* LUT Offset */ sizeof(size_t) + sizeof(spineCount) + sizeof(tocCount);
|
||||
const size_t metadataSize =
|
||||
metadata.title.size() + metadata.author.size() + metadata.coverItemHref.size() + sizeof(uint32_t) * 3;
|
||||
const size_t lutSize = sizeof(size_t) * spineCount + sizeof(size_t) * tocCount;
|
||||
const size_t lutOffset = headerASize + metadataSize;
|
||||
|
||||
// Header A
|
||||
serialization::writePod(bookFile, BOOK_CACHE_VERSION);
|
||||
@ -101,12 +102,11 @@ bool BookMetadataCache::buildBookBin(const std::string& epubPath, const BookMeta
|
||||
serialization::writeString(bookFile, metadata.title);
|
||||
serialization::writeString(bookFile, metadata.author);
|
||||
serialization::writeString(bookFile, metadata.coverItemHref);
|
||||
serialization::writeString(bookFile, metadata.textReferenceHref);
|
||||
|
||||
// Loop through spine entries, writing LUT positions
|
||||
spineFile.seek(0);
|
||||
for (int i = 0; i < spineCount; i++) {
|
||||
uint32_t pos = spineFile.position();
|
||||
auto pos = spineFile.position();
|
||||
auto spineEntry = readSpineEntry(spineFile);
|
||||
serialization::writePod(bookFile, pos + lutOffset + lutSize);
|
||||
}
|
||||
@ -114,37 +114,17 @@ bool BookMetadataCache::buildBookBin(const std::string& epubPath, const BookMeta
|
||||
// Loop through toc entries, writing LUT positions
|
||||
tocFile.seek(0);
|
||||
for (int i = 0; i < tocCount; i++) {
|
||||
uint32_t pos = tocFile.position();
|
||||
auto pos = tocFile.position();
|
||||
auto tocEntry = readTocEntry(tocFile);
|
||||
serialization::writePod(bookFile, pos + lutOffset + lutSize + static_cast<uint32_t>(spineFile.position()));
|
||||
serialization::writePod(bookFile, pos + lutOffset + lutSize + spineFile.position());
|
||||
}
|
||||
|
||||
// LUTs complete
|
||||
// Loop through spines from spine file matching up TOC indexes, calculating cumulative size and writing to book.bin
|
||||
|
||||
ZipFile zip(epubPath);
|
||||
// Pre-open zip file to speed up size calculations
|
||||
if (!zip.open()) {
|
||||
Serial.printf("[%lu] [BMC] Could not open EPUB zip for size calculations\n", millis());
|
||||
bookFile.close();
|
||||
spineFile.close();
|
||||
tocFile.close();
|
||||
return false;
|
||||
}
|
||||
// TODO: For large ZIPs, loading all the localHeaderOffsets will crash.
|
||||
// However not having them loaded is extremely slow. Need a better solution here.
|
||||
// Perhaps only a cache of spine items or a better way to speedup lookups?
|
||||
if (!zip.loadAllFileStatSlims()) {
|
||||
Serial.printf("[%lu] [BMC] Could not load zip local header offsets for size calculations\n", millis());
|
||||
bookFile.close();
|
||||
spineFile.close();
|
||||
tocFile.close();
|
||||
zip.close();
|
||||
return false;
|
||||
}
|
||||
uint32_t cumSize = 0;
|
||||
const ZipFile zip("/sd" + epubPath);
|
||||
size_t cumSize = 0;
|
||||
spineFile.seek(0);
|
||||
int lastSpineTocIndex = -1;
|
||||
for (int i = 0; i < spineCount; i++) {
|
||||
auto spineEntry = readSpineEntry(spineFile);
|
||||
|
||||
@ -160,12 +140,9 @@ bool BookMetadataCache::buildBookBin(const std::string& epubPath, const BookMeta
|
||||
// Not a huge deal if we don't find a TOC entry for the spine entry; this is expected behaviour for EPUBs
|
||||
// Logging here is for debugging
|
||||
if (spineEntry.tocIndex == -1) {
|
||||
Serial.printf(
|
||||
"[%lu] [BMC] Warning: Could not find TOC entry for spine item %d: %s, using title from last section\n",
|
||||
millis(), i, spineEntry.href.c_str());
|
||||
spineEntry.tocIndex = lastSpineTocIndex;
|
||||
Serial.printf("[%lu] [BMC] Warning: Could not find TOC entry for spine item %d: %s\n", millis(), i,
|
||||
spineEntry.href.c_str());
|
||||
}
|
||||
lastSpineTocIndex = spineEntry.tocIndex;
|
||||
|
||||
// Calculate size for cumulative size
|
||||
size_t itemSize = 0;
|
||||
@ -180,8 +157,6 @@ bool BookMetadataCache::buildBookBin(const std::string& epubPath, const BookMeta
|
||||
// Write out spine data to book.bin
|
||||
writeSpineEntry(bookFile, spineEntry);
|
||||
}
|
||||
// Close opened zip file
|
||||
zip.close();
|
||||
|
||||
// Loop through toc entries from toc file writing to book.bin
|
||||
tocFile.seek(0);
|
||||
@ -199,25 +174,25 @@ bool BookMetadataCache::buildBookBin(const std::string& epubPath, const BookMeta
|
||||
}
|
||||
|
||||
bool BookMetadataCache::cleanupTmpFiles() const {
|
||||
if (SdMan.exists((cachePath + tmpSpineBinFile).c_str())) {
|
||||
SdMan.remove((cachePath + tmpSpineBinFile).c_str());
|
||||
if (SD.exists((cachePath + tmpSpineBinFile).c_str())) {
|
||||
SD.remove((cachePath + tmpSpineBinFile).c_str());
|
||||
}
|
||||
if (SdMan.exists((cachePath + tmpTocBinFile).c_str())) {
|
||||
SdMan.remove((cachePath + tmpTocBinFile).c_str());
|
||||
if (SD.exists((cachePath + tmpTocBinFile).c_str())) {
|
||||
SD.remove((cachePath + tmpTocBinFile).c_str());
|
||||
}
|
||||
return true;
|
||||
}
|
||||
|
||||
uint32_t BookMetadataCache::writeSpineEntry(FsFile& file, const SpineEntry& entry) const {
|
||||
const uint32_t pos = file.position();
|
||||
size_t BookMetadataCache::writeSpineEntry(File& file, const SpineEntry& entry) const {
|
||||
const auto pos = file.position();
|
||||
serialization::writeString(file, entry.href);
|
||||
serialization::writePod(file, entry.cumulativeSize);
|
||||
serialization::writePod(file, entry.tocIndex);
|
||||
return pos;
|
||||
}
|
||||
|
||||
uint32_t BookMetadataCache::writeTocEntry(FsFile& file, const TocEntry& entry) const {
|
||||
const uint32_t pos = file.position();
|
||||
size_t BookMetadataCache::writeTocEntry(File& file, const TocEntry& entry) const {
|
||||
const auto pos = file.position();
|
||||
serialization::writeString(file, entry.title);
|
||||
serialization::writeString(file, entry.href);
|
||||
serialization::writeString(file, entry.anchor);
|
||||
@ -248,8 +223,6 @@ void BookMetadataCache::createTocEntry(const std::string& title, const std::stri
|
||||
|
||||
int spineIndex = -1;
|
||||
// find spine index
|
||||
// TODO: This lookup is slow as need to scan through all items each time. We can't hold it all in memory due to size.
|
||||
// But perhaps we can load just the hrefs in a vector/list to do an index lookup?
|
||||
spineFile.seek(0);
|
||||
for (int i = 0; i < spineCount; i++) {
|
||||
auto spineEntry = readSpineEntry(spineFile);
|
||||
@ -271,7 +244,7 @@ void BookMetadataCache::createTocEntry(const std::string& title, const std::stri
|
||||
/* ============= READING / LOADING FUNCTIONS ================ */
|
||||
|
||||
bool BookMetadataCache::load() {
|
||||
if (!SdMan.openFileForRead("BMC", cachePath + bookBinFile, bookFile)) {
|
||||
if (!FsHelpers::openFileForRead("BMC", cachePath + bookBinFile, bookFile)) {
|
||||
return false;
|
||||
}
|
||||
|
||||
@ -290,7 +263,6 @@ bool BookMetadataCache::load() {
|
||||
serialization::readString(bookFile, coreMetadata.title);
|
||||
serialization::readString(bookFile, coreMetadata.author);
|
||||
serialization::readString(bookFile, coreMetadata.coverItemHref);
|
||||
serialization::readString(bookFile, coreMetadata.textReferenceHref);
|
||||
|
||||
loaded = true;
|
||||
Serial.printf("[%lu] [BMC] Loaded cache data: %d spine, %d TOC entries\n", millis(), spineCount, tocCount);
|
||||
@ -309,8 +281,8 @@ BookMetadataCache::SpineEntry BookMetadataCache::getSpineEntry(const int index)
|
||||
}
|
||||
|
||||
// Seek to spine LUT item, read from LUT and get out data
|
||||
bookFile.seek(lutOffset + sizeof(uint32_t) * index);
|
||||
uint32_t spineEntryPos;
|
||||
bookFile.seek(lutOffset + sizeof(size_t) * index);
|
||||
size_t spineEntryPos;
|
||||
serialization::readPod(bookFile, spineEntryPos);
|
||||
bookFile.seek(spineEntryPos);
|
||||
return readSpineEntry(bookFile);
|
||||
@ -328,14 +300,14 @@ BookMetadataCache::TocEntry BookMetadataCache::getTocEntry(const int index) {
|
||||
}
|
||||
|
||||
// Seek to TOC LUT item, read from LUT and get out data
|
||||
bookFile.seek(lutOffset + sizeof(uint32_t) * spineCount + sizeof(uint32_t) * index);
|
||||
uint32_t tocEntryPos;
|
||||
bookFile.seek(lutOffset + sizeof(size_t) * spineCount + sizeof(size_t) * index);
|
||||
size_t tocEntryPos;
|
||||
serialization::readPod(bookFile, tocEntryPos);
|
||||
bookFile.seek(tocEntryPos);
|
||||
return readTocEntry(bookFile);
|
||||
}
|
||||
|
||||
BookMetadataCache::SpineEntry BookMetadataCache::readSpineEntry(FsFile& file) const {
|
||||
BookMetadataCache::SpineEntry BookMetadataCache::readSpineEntry(File& file) const {
|
||||
SpineEntry entry;
|
||||
serialization::readString(file, entry.href);
|
||||
serialization::readPod(file, entry.cumulativeSize);
|
||||
@ -343,7 +315,7 @@ BookMetadataCache::SpineEntry BookMetadataCache::readSpineEntry(FsFile& file) co
|
||||
return entry;
|
||||
}
|
||||
|
||||
BookMetadataCache::TocEntry BookMetadataCache::readTocEntry(FsFile& file) const {
|
||||
BookMetadataCache::TocEntry BookMetadataCache::readTocEntry(File& file) const {
|
||||
TocEntry entry;
|
||||
serialization::readString(file, entry.title);
|
||||
serialization::readString(file, entry.href);
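book.bin therefore packs a small header, the metadata strings, two look-up tables of fixed-width offsets (one per spine entry, one per TOC entry), and then the variable-length entries themselves; getSpineEntry and getTocEntry seek into the LUT, read one offset, and seek again to the record. The standalone sketch below illustrates that LUT pattern with std::fstream and length-prefixed strings — it is not the project's actual serialization format.

```cpp
#include <cstdint>
#include <fstream>
#include <string>
#include <vector>

// Illustration of the LUT pattern: fixed-width offsets up front, variable-length
// records behind them, so any record can be fetched with two seeks.
struct Record { std::string text; };

void writeAll(const std::string& path, const std::vector<Record>& records) {
  std::ofstream out(path, std::ios::binary);
  const uint32_t lutBytes = static_cast<uint32_t>(records.size()) * sizeof(uint32_t);
  uint32_t offset = lutBytes;  // first record starts right after the LUT
  for (const auto& r : records) {  // LUT pass
    out.write(reinterpret_cast<const char*>(&offset), sizeof(offset));
    offset += sizeof(uint32_t) + static_cast<uint32_t>(r.text.size());
  }
  for (const auto& r : records) {  // record pass: length-prefixed strings
    const uint32_t len = static_cast<uint32_t>(r.text.size());
    out.write(reinterpret_cast<const char*>(&len), sizeof(len));
    out.write(r.text.data(), len);
  }
}

Record readOne(const std::string& path, uint32_t index) {
  std::ifstream in(path, std::ios::binary);
  uint32_t offset = 0;
  in.seekg(index * sizeof(uint32_t));  // seek into the LUT
  in.read(reinterpret_cast<char*>(&offset), sizeof(offset));
  in.seekg(offset);                    // seek to the record itself
  uint32_t len = 0;
  in.read(reinterpret_cast<char*>(&len), sizeof(len));
  std::string text(len, '\0');
  if (len > 0) in.read(&text[0], len);
  return Record{text};
}
```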
|
||||
|
||||
@ -1,6 +1,6 @@
|
||||
#pragma once
|
||||
|
||||
#include <SDCardManager.h>
|
||||
#include <SD.h>
|
||||
|
||||
#include <string>
|
||||
|
||||
@ -10,7 +10,6 @@ class BookMetadataCache {
|
||||
std::string title;
|
||||
std::string author;
|
||||
std::string coverItemHref;
|
||||
std::string textReferenceHref;
|
||||
};
|
||||
|
||||
struct SpineEntry {
|
||||
@ -47,15 +46,15 @@ class BookMetadataCache {
|
||||
bool loaded;
|
||||
bool buildMode;
|
||||
|
||||
FsFile bookFile;
|
||||
File bookFile;
|
||||
// Temp file handles during build
|
||||
FsFile spineFile;
|
||||
FsFile tocFile;
|
||||
File spineFile;
|
||||
File tocFile;
|
||||
|
||||
uint32_t writeSpineEntry(FsFile& file, const SpineEntry& entry) const;
|
||||
uint32_t writeTocEntry(FsFile& file, const TocEntry& entry) const;
|
||||
SpineEntry readSpineEntry(FsFile& file) const;
|
||||
TocEntry readTocEntry(FsFile& file) const;
|
||||
size_t writeSpineEntry(File& file, const SpineEntry& entry) const;
|
||||
size_t writeTocEntry(File& file, const TocEntry& entry) const;
|
||||
SpineEntry readSpineEntry(File& file) const;
|
||||
TocEntry readTocEntry(File& file) const;
|
||||
|
||||
public:
|
||||
BookMetadata coreMetadata;
|
||||
|
||||
92
lib/Epub/Epub/FsHelpers.cpp
Normal file
@ -0,0 +1,92 @@
|
||||
#include "FsHelpers.h"
|
||||
|
||||
#include <SD.h>
|
||||
|
||||
#include <vector>
|
||||
|
||||
bool FsHelpers::openFileForRead(const char* moduleName, const std::string& path, File& file) {
|
||||
file = SD.open(path.c_str(), FILE_READ);
|
||||
if (!file) {
|
||||
Serial.printf("[%lu] [%s] Failed to open file for reading: %s\n", millis(), moduleName, path.c_str());
|
||||
return false;
|
||||
}
|
||||
return true;
|
||||
}
|
||||
|
||||
bool FsHelpers::openFileForWrite(const char* moduleName, const std::string& path, File& file) {
|
||||
file = SD.open(path.c_str(), FILE_WRITE, true);
|
||||
if (!file) {
|
||||
Serial.printf("[%lu] [%s] Failed to open file for writing: %s\n", millis(), moduleName, path.c_str());
|
||||
return false;
|
||||
}
|
||||
return true;
|
||||
}
|
||||
|
||||
bool FsHelpers::removeDir(const char* path) {
|
||||
// 1. Open the directory
|
||||
File dir = SD.open(path);
|
||||
if (!dir) {
|
||||
return false;
|
||||
}
|
||||
if (!dir.isDirectory()) {
|
||||
return false;
|
||||
}
|
||||
|
||||
File file = dir.openNextFile();
|
||||
while (file) {
|
||||
String filePath = path;
|
||||
if (!filePath.endsWith("/")) {
|
||||
filePath += "/";
|
||||
}
|
||||
filePath += file.name();
|
||||
|
||||
if (file.isDirectory()) {
|
||||
if (!removeDir(filePath.c_str())) {
|
||||
return false;
|
||||
}
|
||||
} else {
|
||||
if (!SD.remove(filePath.c_str())) {
|
||||
return false;
|
||||
}
|
||||
}
|
||||
file = dir.openNextFile();
|
||||
}
|
||||
|
||||
return SD.rmdir(path);
|
||||
}
|
||||
|
||||
std::string FsHelpers::normalisePath(const std::string& path) {
|
||||
std::vector<std::string> components;
|
||||
std::string component;
|
||||
|
||||
for (const auto c : path) {
|
||||
if (c == '/') {
|
||||
if (!component.empty()) {
|
||||
if (component == "..") {
|
||||
if (!components.empty()) {
|
||||
components.pop_back();
|
||||
}
|
||||
} else {
|
||||
components.push_back(component);
|
||||
}
|
||||
component.clear();
|
||||
}
|
||||
} else {
|
||||
component += c;
|
||||
}
|
||||
}
|
||||
|
||||
if (!component.empty()) {
|
||||
components.push_back(component);
|
||||
}
|
||||
|
||||
std::string result;
|
||||
for (const auto& c : components) {
|
||||
if (!result.empty()) {
|
||||
result += "/";
|
||||
}
|
||||
result += c;
|
||||
}
|
||||
|
||||
return result;
|
||||
}
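FsHelpers::normalisePath collapses empty segments and resolves ".." components so that relative hrefs map onto real ZIP entry names. A small host-side check of the same algorithm (hypothetical inputs, logic duplicated here so the snippet compiles on its own) might look like:

```cpp
#include <cassert>
#include <sstream>
#include <string>
#include <vector>

// Host-side sketch of the normalisation behaviour; inputs are made-up examples.
static std::string normalise(const std::string& path) {
  std::vector<std::string> parts;
  std::string part;
  std::stringstream ss(path);
  while (std::getline(ss, part, '/')) {
    if (part.empty()) continue;          // drop empty segments from "//" or a leading "/"
    if (part == "..") {
      if (!parts.empty()) parts.pop_back();  // step back one directory
    } else {
      parts.push_back(part);
    }
  }
  std::string out;
  for (const auto& p : parts) {
    if (!out.empty()) out += '/';
    out += p;
  }
  return out;
}

int main() {
  assert(normalise("OEBPS/text/../images/cover.jpg") == "OEBPS/images/cover.jpg");
  assert(normalise("/OEBPS//chapter1.xhtml") == "OEBPS/chapter1.xhtml");
  return 0;
}
```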
|
||||
12
lib/Epub/Epub/FsHelpers.h
Normal file
@ -0,0 +1,12 @@
#pragma once
#include <FS.h>

#include <string>

class FsHelpers {
 public:
  static bool openFileForRead(const char* moduleName, const std::string& path, File& file);
  static bool openFileForWrite(const char* moduleName, const std::string& path, File& file);
  static bool removeDir(const char* path);
  static std::string normalisePath(const std::string& path);
};
@ -3,19 +3,21 @@
|
||||
#include <HardwareSerial.h>
|
||||
#include <Serialization.h>
|
||||
|
||||
void PageLine::render(GfxRenderer& renderer, const int fontId, const int xOffset, const int yOffset) {
|
||||
block->render(renderer, fontId, xPos + xOffset, yPos + yOffset);
|
||||
namespace {
|
||||
constexpr uint8_t PAGE_FILE_VERSION = 3;
|
||||
}
|
||||
|
||||
bool PageLine::serialize(FsFile& file) {
|
||||
void PageLine::render(GfxRenderer& renderer, const int fontId) { block->render(renderer, fontId, xPos, yPos); }
|
||||
|
||||
void PageLine::serialize(File& file) {
|
||||
serialization::writePod(file, xPos);
|
||||
serialization::writePod(file, yPos);
|
||||
|
||||
// serialize TextBlock pointed to by PageLine
|
||||
return block->serialize(file);
|
||||
block->serialize(file);
|
||||
}
|
||||
|
||||
std::unique_ptr<PageLine> PageLine::deserialize(FsFile& file) {
|
||||
std::unique_ptr<PageLine> PageLine::deserialize(File& file) {
|
||||
int16_t xPos;
|
||||
int16_t yPos;
|
||||
serialization::readPod(file, xPos);
|
||||
@ -25,34 +27,39 @@ std::unique_ptr<PageLine> PageLine::deserialize(FsFile& file) {
|
||||
return std::unique_ptr<PageLine>(new PageLine(std::move(tb), xPos, yPos));
|
||||
}
|
||||
|
||||
void Page::render(GfxRenderer& renderer, const int fontId, const int xOffset, const int yOffset) const {
|
||||
void Page::render(GfxRenderer& renderer, const int fontId) const {
|
||||
for (auto& element : elements) {
|
||||
element->render(renderer, fontId, xOffset, yOffset);
|
||||
element->render(renderer, fontId);
|
||||
}
|
||||
}
|
||||
|
||||
bool Page::serialize(FsFile& file) const {
|
||||
const uint16_t count = elements.size();
|
||||
void Page::serialize(File& file) const {
|
||||
serialization::writePod(file, PAGE_FILE_VERSION);
|
||||
|
||||
const uint32_t count = elements.size();
|
||||
serialization::writePod(file, count);
|
||||
|
||||
for (const auto& el : elements) {
|
||||
// Only PageLine exists currently
|
||||
serialization::writePod(file, static_cast<uint8_t>(TAG_PageLine));
|
||||
if (!el->serialize(file)) {
|
||||
return false;
|
||||
}
|
||||
el->serialize(file);
|
||||
}
|
||||
|
||||
return true;
|
||||
}
|
||||
|
||||
std::unique_ptr<Page> Page::deserialize(FsFile& file) {
|
||||
std::unique_ptr<Page> Page::deserialize(File& file) {
|
||||
uint8_t version;
|
||||
serialization::readPod(file, version);
|
||||
if (version != PAGE_FILE_VERSION) {
|
||||
Serial.printf("[%lu] [PGE] Deserialization failed: Unknown version %u\n", millis(), version);
|
||||
return nullptr;
|
||||
}
|
||||
|
||||
auto page = std::unique_ptr<Page>(new Page());
|
||||
|
||||
uint16_t count;
|
||||
uint32_t count;
|
||||
serialization::readPod(file, count);
|
||||
|
||||
for (uint16_t i = 0; i < count; i++) {
|
||||
for (uint32_t i = 0; i < count; i++) {
|
||||
uint8_t tag;
|
||||
serialization::readPod(file, tag);
|
||||
|
||||
|
||||
@ -1,5 +1,5 @@
|
||||
#pragma once
|
||||
#include <SdFat.h>
|
||||
#include <FS.h>
|
||||
|
||||
#include <utility>
|
||||
#include <vector>
|
||||
@ -17,8 +17,8 @@ class PageElement {
|
||||
int16_t yPos;
|
||||
explicit PageElement(const int16_t xPos, const int16_t yPos) : xPos(xPos), yPos(yPos) {}
|
||||
virtual ~PageElement() = default;
|
||||
virtual void render(GfxRenderer& renderer, int fontId, int xOffset, int yOffset) = 0;
|
||||
virtual bool serialize(FsFile& file) = 0;
|
||||
virtual void render(GfxRenderer& renderer, int fontId) = 0;
|
||||
virtual void serialize(File& file) = 0;
|
||||
};
|
||||
|
||||
// a line from a block element
|
||||
@ -28,16 +28,16 @@ class PageLine final : public PageElement {
|
||||
public:
|
||||
PageLine(std::shared_ptr<TextBlock> block, const int16_t xPos, const int16_t yPos)
|
||||
: PageElement(xPos, yPos), block(std::move(block)) {}
|
||||
void render(GfxRenderer& renderer, int fontId, int xOffset, int yOffset) override;
|
||||
bool serialize(FsFile& file) override;
|
||||
static std::unique_ptr<PageLine> deserialize(FsFile& file);
|
||||
void render(GfxRenderer& renderer, int fontId) override;
|
||||
void serialize(File& file) override;
|
||||
static std::unique_ptr<PageLine> deserialize(File& file);
|
||||
};
|
||||
|
||||
class Page {
|
||||
public:
|
||||
// the list of block index and line numbers on this page
|
||||
std::vector<std::shared_ptr<PageElement>> elements;
|
||||
void render(GfxRenderer& renderer, int fontId, int xOffset, int yOffset) const;
|
||||
bool serialize(FsFile& file) const;
|
||||
static std::unique_ptr<Page> deserialize(FsFile& file);
|
||||
void render(GfxRenderer& renderer, int fontId) const;
|
||||
void serialize(File& file) const;
|
||||
static std::unique_ptr<Page> deserialize(File& file);
|
||||
};
|
||||
|
||||
@ -1,7 +1,6 @@
|
||||
#include "ParsedText.h"
|
||||
|
||||
#include <GfxRenderer.h>
|
||||
#include <Utf8.h>
|
||||
|
||||
#include <algorithm>
|
||||
#include <cmath>
|
||||
@ -10,7 +9,6 @@
|
||||
#include <limits>
|
||||
#include <vector>
|
||||
|
||||
#include "hyphenation/HyphenationCommon.h"
|
||||
#include "hyphenation/Hyphenator.h"
|
||||
|
||||
constexpr int MAX_COST = std::numeric_limits<int>::max();
|
||||
@ -20,46 +18,20 @@ namespace {
|
||||
struct HyphenSplitDecision {
|
||||
size_t byteOffset;
|
||||
uint16_t prefixWidth;
|
||||
bool appendHyphen; // true when we must draw an extra hyphen after the prefix glyphs
|
||||
};
|
||||
|
||||
// Verifies whether the substring ending at `offset` already contains a literal hyphen glyph, so we can avoid
|
||||
// drawing a duplicate hyphen when breaking the word.
|
||||
bool endsWithExplicitHyphen(const std::string& word, const size_t offset) {
|
||||
if (offset == 0 || offset > word.size()) {
|
||||
return false;
|
||||
}
|
||||
|
||||
const unsigned char* base = reinterpret_cast<const unsigned char*>(word.data());
|
||||
const unsigned char* ptr = base;
|
||||
const unsigned char* target = base + offset;
|
||||
const unsigned char* lastStart = nullptr;
|
||||
|
||||
while (ptr < target) {
|
||||
lastStart = ptr;
|
||||
utf8NextCodepoint(&ptr);
|
||||
if (ptr > target) {
|
||||
return false;
|
||||
}
|
||||
}
|
||||
|
||||
if (!lastStart || ptr != target) {
|
||||
return false;
|
||||
}
|
||||
|
||||
const unsigned char* tmp = lastStart;
|
||||
const uint32_t cp = utf8NextCodepoint(&tmp); // decode the codepoint immediately prior to the break
|
||||
return isExplicitHyphen(cp);
|
||||
}
|
||||
|
||||
bool chooseSplitForWidth(const GfxRenderer& renderer, const int fontId, const std::string& word,
|
||||
const EpdFontFamily::Style style, const int availableWidth, const bool includeFallback,
|
||||
const EpdFontStyle style, const int availableWidth, const bool includeFallback,
|
||||
HyphenSplitDecision* decision) {
|
||||
if (!decision || availableWidth <= 0) {
|
||||
return false;
|
||||
}
|
||||
|
||||
const int hyphenWidth = renderer.getTextWidth(fontId, "-", style);
|
||||
const int adjustedWidth = availableWidth - hyphenWidth;
|
||||
if (adjustedWidth <= 0) {
|
||||
return false;
|
||||
}
|
||||
|
||||
auto offsets = Hyphenator::breakOffsets(word, includeFallback);
|
||||
if (offsets.empty()) {
|
||||
@ -68,20 +40,13 @@ bool chooseSplitForWidth(const GfxRenderer& renderer, const int fontId, const st
|
||||
|
||||
size_t chosenOffset = std::numeric_limits<size_t>::max();
|
||||
uint16_t chosenWidth = 0;
|
||||
bool chosenAppendHyphen = true;
|
||||
|
||||
for (const size_t offset : offsets) {
|
||||
const bool needsInsertedHyphen = !endsWithExplicitHyphen(word, offset);
|
||||
const int budget = availableWidth - (needsInsertedHyphen ? hyphenWidth : 0);
|
||||
if (budget <= 0) {
|
||||
continue;
|
||||
}
|
||||
const std::string prefix = word.substr(0, offset);
|
||||
const int prefixWidth = renderer.getTextWidth(fontId, prefix.c_str(), style);
|
||||
if (prefixWidth <= budget) {
|
||||
if (prefixWidth <= adjustedWidth) {
|
||||
chosenOffset = offset;
|
||||
chosenWidth = static_cast<uint16_t>(prefixWidth + (needsInsertedHyphen ? hyphenWidth : 0));
|
||||
chosenAppendHyphen = needsInsertedHyphen;
|
||||
chosenWidth = static_cast<uint16_t>(prefixWidth + hyphenWidth);
|
||||
} else {
|
||||
break;
|
||||
}
|
||||
@ -93,13 +58,12 @@ bool chooseSplitForWidth(const GfxRenderer& renderer, const int fontId, const st
|
||||
|
||||
decision->byteOffset = chosenOffset;
|
||||
decision->prefixWidth = chosenWidth;
|
||||
decision->appendHyphen = chosenAppendHyphen;
|
||||
return true;
|
||||
}
|
||||
|
||||
} // namespace
|
||||
|
||||
void ParsedText::addWord(std::string word, const EpdFontFamily::Style fontStyle) {
|
||||
void ParsedText::addWord(std::string word, const EpdFontStyle fontStyle) {
|
||||
if (word.empty()) return;
|
||||
|
||||
words.push_back(std::move(word));
|
||||
@ -107,14 +71,14 @@ void ParsedText::addWord(std::string word, const EpdFontFamily::Style fontStyle)
|
||||
}
|
||||
|
||||
// Consumes data to minimize memory usage
|
||||
void ParsedText::layoutAndExtractLines(const GfxRenderer& renderer, const int fontId, const uint16_t viewportWidth,
|
||||
void ParsedText::layoutAndExtractLines(const GfxRenderer& renderer, const int fontId, const int horizontalMargin,
|
||||
const std::function<void(std::shared_ptr<TextBlock>)>& processLine,
|
||||
const bool includeLastLine) {
|
||||
if (words.empty()) {
|
||||
return;
|
||||
}
|
||||
|
||||
const int pageWidth = viewportWidth;
|
||||
const int pageWidth = renderer.getScreenWidth() - horizontalMargin;
|
||||
const int spaceWidth = renderer.getSpaceWidth(fontId);
|
||||
// Pre-split oversized tokens so the DP step always has feasible line candidates.
|
||||
auto wordWidths = calculateWordWidths(renderer, fontId, pageWidth);
|
||||
@ -146,17 +110,14 @@ std::vector<uint16_t> ParsedText::calculateWordWidths(const GfxRenderer& rendere
|
||||
uint16_t width = renderer.getTextWidth(fontId, wordsIt->c_str(), *wordStylesIt);
|
||||
|
||||
if (width > pageWidth) {
|
||||
HyphenSplitDecision decision{};
|
||||
HyphenSplitDecision decision;
|
||||
if (chooseSplitForWidth(renderer, fontId, *wordsIt, *wordStylesIt, pageWidth, true, &decision)) {
|
||||
const std::string originalWord = *wordsIt;
|
||||
const std::string tail = originalWord.substr(decision.byteOffset);
|
||||
if (tail.empty()) {
|
||||
continue;
|
||||
}
|
||||
std::string prefix = originalWord.substr(0, decision.byteOffset);
|
||||
if (decision.appendHyphen) {
|
||||
prefix += "-";
|
||||
}
|
||||
const std::string prefix = originalWord.substr(0, decision.byteOffset) + "-";
|
||||
|
||||
*wordsIt = prefix;
|
||||
auto nextWordIt = words.insert(std::next(wordsIt), tail);
|
||||
@ -274,7 +235,7 @@ std::vector<size_t> ParsedText::computeLineBreaks(const GfxRenderer& renderer, c
|
||||
}
|
||||
|
||||
const int availableWidth = pageWidth - lineWidth - interWordSpace;
|
||||
HyphenSplitDecision decision{};
|
||||
HyphenSplitDecision decision;
|
||||
if (!chooseSplitForWidth(renderer, fontId, *wordNodeIt, *styleNodeIt, availableWidth, false, &decision)) {
|
||||
break;
|
||||
}
|
||||
@ -284,12 +245,9 @@ std::vector<size_t> ParsedText::computeLineBreaks(const GfxRenderer& renderer, c
|
||||
if (tail.empty()) {
|
||||
break;
|
||||
}
|
||||
std::string prefix = originalWord.substr(0, decision.byteOffset);
|
||||
if (decision.appendHyphen) {
|
||||
prefix += "-";
|
||||
}
|
||||
const std::string prefix = originalWord.substr(0, decision.byteOffset) + "-";
|
||||
|
||||
const EpdFontFamily::Style styleForSplit = *styleNodeIt;
|
||||
const EpdFontStyle styleForSplit = *styleNodeIt;
|
||||
*wordNodeIt = tail;
|
||||
words.insert(wordNodeIt, prefix);
|
||||
wordStyles.insert(styleNodeIt, styleForSplit);
|
||||
@ -364,7 +322,7 @@ void ParsedText::extractLine(const size_t breakIndex, const int pageWidth, const
|
||||
|
||||
std::list<std::string> lineWords;
|
||||
lineWords.splice(lineWords.begin(), words, words.begin(), wordEndIt);
|
||||
std::list<EpdFontFamily::Style> lineWordStyles;
|
||||
std::list<EpdFontStyle> lineWordStyles;
|
||||
lineWordStyles.splice(lineWordStyles.begin(), wordStyles, wordStyles.begin(), wordStyleEndIt);
|
||||
|
||||
processLine(std::make_shared<TextBlock>(std::move(lineWords), std::move(lineXPos), std::move(lineWordStyles), style));
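computeLineBreaks performs a cost-based search over candidate break points (MAX_COST marks infeasible lines), and calculateWordWidths pre-splits oversized words via the hyphenator so the search always has feasible candidates. As a compact standalone illustration of the underlying minimum-raggedness dynamic programme — simplified to a quadratic-slack cost with no hyphenation, and assuming every word fits on a line by itself — consider:

```cpp
#include <cstddef>
#include <limits>
#include <vector>

// Minimum-raggedness line breaking: cost of a line is (unused space)^2, last line free.
// Returns, for each word, the index of the word that starts its line.
// Assumes no single word is wider than pageWidth (the real code pre-splits those).
std::vector<size_t> breakLines(const std::vector<int>& wordWidths, int spaceWidth, int pageWidth) {
  const size_t n = wordWidths.size();
  const long long INF = std::numeric_limits<long long>::max();
  std::vector<long long> best(n + 1, INF);
  std::vector<size_t> from(n + 1, 0);
  best[0] = 0;
  for (size_t i = 0; i < n; ++i) {
    if (best[i] == INF) continue;
    int lineWidth = 0;
    for (size_t j = i; j < n; ++j) {
      lineWidth += wordWidths[j] + (j > i ? spaceWidth : 0);
      if (lineWidth > pageWidth) break;  // line i..j no longer fits
      const int slack = pageWidth - lineWidth;
      const long long cost = (j + 1 == n) ? 0 : static_cast<long long>(slack) * slack;
      if (best[i] + cost < best[j + 1]) {
        best[j + 1] = best[i] + cost;
        from[j + 1] = i;  // best line ending at j starts at word i
      }
    }
  }
  // Walk the break chain backwards and record each word's line start.
  std::vector<size_t> lineStart(n, 0);
  for (size_t end = n; end > 0; end = from[end]) {
    for (size_t k = from[end]; k < end; ++k) lineStart[k] = from[end];
  }
  return lineStart;
}
```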
|
||||
|
||||
@ -14,8 +14,8 @@ class GfxRenderer;
|
||||
|
||||
class ParsedText {
|
||||
std::list<std::string> words;
|
||||
std::list<EpdFontFamily::Style> wordStyles;
|
||||
TextBlock::Style style;
|
||||
std::list<EpdFontStyle> wordStyles;
|
||||
TextBlock::BLOCK_STYLE style;
|
||||
bool extraParagraphSpacing;
|
||||
bool hyphenationEnabled;
|
||||
|
||||
@ -27,16 +27,17 @@ class ParsedText {
|
||||
std::vector<uint16_t> calculateWordWidths(const GfxRenderer& renderer, int fontId, int pageWidth);
|
||||
|
||||
public:
|
||||
explicit ParsedText(const TextBlock::Style style, const bool extraParagraphSpacing, const bool hyphenationEnabled)
|
||||
explicit ParsedText(const TextBlock::BLOCK_STYLE style, const bool extraParagraphSpacing,
|
||||
const bool hyphenationEnabled)
|
||||
: style(style), extraParagraphSpacing(extraParagraphSpacing), hyphenationEnabled(hyphenationEnabled) {}
|
||||
~ParsedText() = default;
|
||||
|
||||
void addWord(std::string word, EpdFontFamily::Style fontStyle);
|
||||
void setStyle(const TextBlock::Style style) { this->style = style; }
|
||||
TextBlock::Style getStyle() const { return style; }
|
||||
void addWord(std::string word, EpdFontStyle fontStyle);
|
||||
void setStyle(const TextBlock::BLOCK_STYLE style) { this->style = style; }
|
||||
TextBlock::BLOCK_STYLE getStyle() const { return style; }
|
||||
size_t size() const { return words.size(); }
|
||||
bool isEmpty() const { return words.empty(); }
|
||||
void layoutAndExtractLines(const GfxRenderer& renderer, int fontId, uint16_t viewportWidth,
|
||||
void layoutAndExtractLines(const GfxRenderer& renderer, int fontId, int horizontalMargin,
|
||||
const std::function<void(std::shared_ptr<TextBlock>)>& processLine,
|
||||
bool includeLastLine = true);
|
||||
};
|
||||
|
||||
@ -1,114 +1,113 @@
|
||||
#include "Section.h"
|
||||
|
||||
#include <SDCardManager.h>
|
||||
#include <FsHelpers.h>
|
||||
#include <SD.h>
|
||||
#include <Serialization.h>
|
||||
|
||||
#include "Page.h"
|
||||
#include "parsers/ChapterHtmlSlimParser.h"
|
||||
|
||||
namespace {
|
||||
constexpr uint8_t SECTION_FILE_VERSION = 9;
|
||||
constexpr uint32_t HEADER_SIZE = sizeof(uint8_t) + sizeof(int) + sizeof(float) + sizeof(bool) + sizeof(uint8_t) +
|
||||
sizeof(uint16_t) + sizeof(uint16_t) + sizeof(uint16_t) + sizeof(uint32_t);
|
||||
} // namespace
|
||||
constexpr uint8_t SECTION_FILE_VERSION = 6;
|
||||
}
|
||||
|
||||
uint32_t Section::onPageComplete(std::unique_ptr<Page> page) {
|
||||
if (!file) {
|
||||
Serial.printf("[%lu] [SCT] File not open for writing page %d\n", millis(), pageCount);
|
||||
return 0;
|
||||
}
|
||||
void Section::onPageComplete(std::unique_ptr<Page> page) {
|
||||
const auto filePath = cachePath + "/page_" + std::to_string(pageCount) + ".bin";
|
||||
|
||||
const uint32_t position = file.position();
|
||||
if (!page->serialize(file)) {
|
||||
Serial.printf("[%lu] [SCT] Failed to serialize page %d\n", millis(), pageCount);
|
||||
return 0;
|
||||
File outputFile;
|
||||
if (!FsHelpers::openFileForWrite("SCT", filePath, outputFile)) {
|
||||
return;
|
||||
}
|
||||
page->serialize(outputFile);
|
||||
outputFile.close();
|
||||
|
||||
Serial.printf("[%lu] [SCT] Page %d processed\n", millis(), pageCount);
|
||||
|
||||
pageCount++;
|
||||
return position;
|
||||
}
|
||||
|
||||
void Section::writeSectionFileHeader(const int fontId, const float lineCompression, const bool extraParagraphSpacing,
|
||||
const uint8_t paragraphAlignment, const uint16_t viewportWidth,
|
||||
const uint16_t viewportHeight, const bool hyphenationEnabled) {
|
||||
if (!file) {
|
||||
Serial.printf("[%lu] [SCT] File not open for writing header\n", millis());
|
||||
void Section::writeCacheMetadata(const int fontId, const float lineCompression, const int marginTop,
|
||||
const int marginRight, const int marginBottom, const int marginLeft,
|
||||
const bool extraParagraphSpacing, const bool hyphenationEnabled) const {
|
||||
File outputFile;
|
||||
if (!FsHelpers::openFileForWrite("SCT", cachePath + "/section.bin", outputFile)) {
|
||||
return;
|
||||
}
|
||||
static_assert(HEADER_SIZE == sizeof(SECTION_FILE_VERSION) + sizeof(fontId) + sizeof(lineCompression) +
|
||||
sizeof(extraParagraphSpacing) + sizeof(paragraphAlignment) + sizeof(viewportWidth) +
|
||||
sizeof(viewportHeight) + sizeof(pageCount) + sizeof(uint32_t),
|
||||
"Header size mismatch");
|
||||
serialization::writePod(file, SECTION_FILE_VERSION);
|
||||
serialization::writePod(file, fontId);
|
||||
serialization::writePod(file, lineCompression);
|
||||
serialization::writePod(file, extraParagraphSpacing);
|
||||
serialization::writePod(file, paragraphAlignment);
|
||||
serialization::writePod(file, viewportWidth);
|
||||
serialization::writePod(file, viewportHeight);
|
||||
serialization::writePod(file, pageCount); // Placeholder for page count (will be initially 0 when written)
|
||||
serialization::writePod(file, hyphenationEnabled);
|
||||
serialization::writePod(file, static_cast<uint32_t>(0)); // Placeholder for LUT offset
|
||||
serialization::writePod(outputFile, SECTION_FILE_VERSION);
|
||||
serialization::writePod(outputFile, fontId);
|
||||
serialization::writePod(outputFile, lineCompression);
|
||||
serialization::writePod(outputFile, marginTop);
|
||||
serialization::writePod(outputFile, marginRight);
|
||||
serialization::writePod(outputFile, marginBottom);
|
||||
serialization::writePod(outputFile, marginLeft);
|
||||
serialization::writePod(outputFile, extraParagraphSpacing);
|
||||
serialization::writePod(outputFile, hyphenationEnabled);
|
||||
serialization::writePod(outputFile, pageCount);
|
||||
outputFile.close();
|
||||
}
|
||||
|
||||
bool Section::loadSectionFile(const int fontId, const float lineCompression, const bool extraParagraphSpacing,
|
||||
const uint8_t paragraphAlignment, const uint16_t viewportWidth,
|
||||
const uint16_t viewportHeight, const bool hyphenationEnabled) {
|
||||
if (!SdMan.openFileForRead("SCT", filePath, file)) {
|
||||
bool Section::loadCacheMetadata(const int fontId, const float lineCompression, const int marginTop,
|
||||
const int marginRight, const int marginBottom, const int marginLeft,
|
||||
const bool extraParagraphSpacing, const bool hyphenationEnabled) {
|
||||
const auto sectionFilePath = cachePath + "/section.bin";
|
||||
File inputFile;
|
||||
if (!FsHelpers::openFileForRead("SCT", sectionFilePath, inputFile)) {
|
||||
return false;
|
||||
}
|
||||
|
||||
// Match parameters
|
||||
{
|
||||
uint8_t version;
|
||||
serialization::readPod(file, version);
|
||||
serialization::readPod(inputFile, version);
|
||||
if (version != SECTION_FILE_VERSION) {
|
||||
file.close();
|
||||
inputFile.close();
|
||||
Serial.printf("[%lu] [SCT] Deserialization failed: Unknown version %u\n", millis(), version);
|
||||
clearCache();
|
||||
return false;
|
||||
}
|
||||
|
||||
int fileFontId;
|
||||
uint16_t fileViewportWidth, fileViewportHeight;
|
||||
int fileFontId, fileMarginTop, fileMarginRight, fileMarginBottom, fileMarginLeft;
|
||||
float fileLineCompression;
|
||||
bool fileExtraParagraphSpacing;
|
||||
uint8_t fileParagraphAlignment;
|
||||
bool fileHyphenationEnabled;
|
||||
serialization::readPod(file, fileFontId);
|
||||
serialization::readPod(file, fileLineCompression);
|
||||
serialization::readPod(file, fileExtraParagraphSpacing);
|
||||
serialization::readPod(file, fileParagraphAlignment);
|
||||
serialization::readPod(file, fileViewportWidth);
|
||||
serialization::readPod(file, fileViewportHeight);
|
||||
serialization::readPod(file, fileHyphenationEnabled);
|
||||
serialization::readPod(inputFile, fileFontId);
|
||||
serialization::readPod(inputFile, fileLineCompression);
|
||||
serialization::readPod(inputFile, fileMarginTop);
|
||||
serialization::readPod(inputFile, fileMarginRight);
|
||||
serialization::readPod(inputFile, fileMarginBottom);
|
||||
serialization::readPod(inputFile, fileMarginLeft);
|
||||
serialization::readPod(inputFile, fileExtraParagraphSpacing);
|
||||
serialization::readPod(inputFile, fileHyphenationEnabled);
|
||||
|
||||
if (fontId != fileFontId || lineCompression != fileLineCompression ||
|
||||
extraParagraphSpacing != fileExtraParagraphSpacing || paragraphAlignment != fileParagraphAlignment ||
|
||||
viewportWidth != fileViewportWidth || viewportHeight != fileViewportHeight ||
|
||||
hyphenationEnabled != fileHyphenationEnabled) {
|
||||
file.close();
|
||||
if (fontId != fileFontId || lineCompression != fileLineCompression || marginTop != fileMarginTop ||
|
||||
marginRight != fileMarginRight || marginBottom != fileMarginBottom || marginLeft != fileMarginLeft ||
|
||||
extraParagraphSpacing != fileExtraParagraphSpacing || hyphenationEnabled != fileHyphenationEnabled) {
|
||||
inputFile.close();
|
||||
Serial.printf("[%lu] [SCT] Deserialization failed: Parameters do not match\n", millis());
|
||||
clearCache();
|
||||
return false;
|
||||
}
|
||||
}
|
||||
|
||||
serialization::readPod(file, pageCount);
|
||||
file.close();
|
||||
serialization::readPod(inputFile, pageCount);
|
||||
inputFile.close();
|
||||
Serial.printf("[%lu] [SCT] Deserialization succeeded: %d pages\n", millis(), pageCount);
|
||||
return true;
|
||||
}
|
||||
|
||||
void Section::setupCacheDir() const {
|
||||
epub->setupCacheDir();
|
||||
SD.mkdir(cachePath.c_str());
|
||||
}
|
||||
|
||||
// Your updated class method (assuming you are using the 'SD' object, which is a wrapper for a specific filesystem)
|
||||
bool Section::clearCache() const {
|
||||
if (!SdMan.exists(filePath.c_str())) {
|
||||
if (!SD.exists(cachePath.c_str())) {
|
||||
Serial.printf("[%lu] [SCT] Cache does not exist, no action needed\n", millis());
|
||||
return true;
|
||||
}
|
||||
|
||||
if (!SdMan.remove(filePath.c_str())) {
|
||||
if (!FsHelpers::removeDir(cachePath.c_str())) {
|
||||
Serial.printf("[%lu] [SCT] Failed to clear cache\n", millis());
|
||||
return false;
|
||||
}
|
||||
@ -117,123 +116,50 @@ bool Section::clearCache() const {
|
||||
return true;
|
||||
}
|
||||
|
||||
bool Section::createSectionFile(const int fontId, const float lineCompression, const bool extraParagraphSpacing,
                                const uint8_t paragraphAlignment, const uint16_t viewportWidth,
                                const uint16_t viewportHeight, const std::function<void()>& progressSetupFn,
                                const std::function<void(int)>& progressFn, const bool hyphenationEnabled) {
  constexpr uint32_t MIN_SIZE_FOR_PROGRESS = 50 * 1024;  // 50KB
bool Section::persistPageDataToSD(const int fontId, const float lineCompression, const int marginTop,
                                  const int marginRight, const int marginBottom, const int marginLeft,
                                  const bool extraParagraphSpacing, const bool hyphenationEnabled) {
  const auto localPath = epub->getSpineItem(spineIndex).href;
  const auto tmpHtmlPath = epub->getCachePath() + "/.tmp_" + std::to_string(spineIndex) + ".html";

  // Create cache directory if it doesn't exist
  {
    const auto sectionsDir = epub->getCachePath() + "/sections";
    SdMan.mkdir(sectionsDir.c_str());
  }

  // Retry logic for SD card timing issues
  bool success = false;
  uint32_t fileSize = 0;
  for (int attempt = 0; attempt < 3 && !success; attempt++) {
    if (attempt > 0) {
      Serial.printf("[%lu] [SCT] Retrying stream (attempt %d)...\n", millis(), attempt + 1);
      delay(50);  // Brief delay before retry
    }

    // Remove any incomplete file from previous attempt before retrying
    if (SdMan.exists(tmpHtmlPath.c_str())) {
      SdMan.remove(tmpHtmlPath.c_str());
    }

    FsFile tmpHtml;
    if (!SdMan.openFileForWrite("SCT", tmpHtmlPath, tmpHtml)) {
      continue;
    }
    success = epub->readItemContentsToStream(localPath, tmpHtml, 1024);
    fileSize = tmpHtml.size();
    tmpHtml.close();

    // If streaming failed, remove the incomplete file immediately
    if (!success && SdMan.exists(tmpHtmlPath.c_str())) {
      SdMan.remove(tmpHtmlPath.c_str());
      Serial.printf("[%lu] [SCT] Removed incomplete temp file after failed attempt\n", millis());
    }
  File tmpHtml;
  if (!FsHelpers::openFileForWrite("SCT", tmpHtmlPath, tmpHtml)) {
    return false;
  }
  bool success = epub->readItemContentsToStream(localPath, tmpHtml, 1024);
  tmpHtml.close();

  if (!success) {
    Serial.printf("[%lu] [SCT] Failed to stream item contents to temp file after retries\n", millis());
    Serial.printf("[%lu] [SCT] Failed to stream item contents to temp file\n", millis());
    return false;
  }

  Serial.printf("[%lu] [SCT] Streamed temp HTML to %s (%d bytes)\n", millis(), tmpHtmlPath.c_str(), fileSize);
  Serial.printf("[%lu] [SCT] Streamed temp HTML to %s\n", millis(), tmpHtmlPath.c_str());

  // Only show progress bar for larger chapters where rendering overhead is worth it
  if (progressSetupFn && fileSize >= MIN_SIZE_FOR_PROGRESS) {
    progressSetupFn();
  }

  if (!SdMan.openFileForWrite("SCT", filePath, file)) {
    return false;
  }
  writeSectionFileHeader(fontId, lineCompression, extraParagraphSpacing, paragraphAlignment, viewportWidth,
                         viewportHeight, hyphenationEnabled);
  std::vector<uint32_t> lut = {};

  ChapterHtmlSlimParser visitor(
      tmpHtmlPath, renderer, fontId, lineCompression, extraParagraphSpacing, hyphenationEnabled, paragraphAlignment,
      viewportWidth, viewportHeight,
      [this, &lut](std::unique_ptr<Page> page) { lut.emplace_back(this->onPageComplete(std::move(page))); },
      progressFn);
  ChapterHtmlSlimParser visitor(tmpHtmlPath, renderer, fontId, lineCompression, marginTop, marginRight, marginBottom,
                                marginLeft, extraParagraphSpacing, hyphenationEnabled,
                                [this](std::unique_ptr<Page> page) { this->onPageComplete(std::move(page)); });
  success = visitor.parseAndBuildPages();

  SdMan.remove(tmpHtmlPath.c_str());
  SD.remove(tmpHtmlPath.c_str());
  if (!success) {
    Serial.printf("[%lu] [SCT] Failed to parse XML and build pages\n", millis());
    file.close();
    SdMan.remove(filePath.c_str());
    return false;
  }

  const uint32_t lutOffset = file.position();
  bool hasFailedLutRecords = false;
  // Write LUT
  for (const uint32_t& pos : lut) {
    if (pos == 0) {
      hasFailedLutRecords = true;
      break;
    }
    serialization::writePod(file, pos);
  }
  writeCacheMetadata(fontId, lineCompression, marginTop, marginRight, marginBottom, marginLeft, extraParagraphSpacing,
                     hyphenationEnabled);

  if (hasFailedLutRecords) {
    Serial.printf("[%lu] [SCT] Failed to write LUT due to invalid page positions\n", millis());
    file.close();
    SdMan.remove(filePath.c_str());
    return false;
  }

  // Go back and write LUT offset
  file.seek(HEADER_SIZE - sizeof(uint32_t) - sizeof(pageCount));
  serialization::writePod(file, pageCount);
  serialization::writePod(file, lutOffset);
  file.close();
  return true;
}

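onPageComplete is not part of this hunk, so the sketch below is only a guess at its shape in the single-file variant, inferred from the LUT handling above: each finished page would be appended to the already-open section file and its starting offset returned, with 0 signalling failure (which is exactly what the LUT writer checks for). Page::serialize and the pageCount update are assumptions, not code from the repository.

// Hypothetical sketch only - the real body is outside this diff.
uint32_t Section::onPageComplete(std::unique_ptr<Page> page) {
  const uint32_t pagePos = file.position();  // where this page starts inside sections/<spineIndex>.bin
  if (!page || !page->serialize(file)) {     // assumed counterpart to Page::deserialize
    return 0;                                // 0 marks a failed record; createSectionFile aborts on it
  }
  pageCount++;                               // later written back into the header before the file is closed
  return pagePos;
}
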
std::unique_ptr<Page> Section::loadPageFromSectionFile() {
  if (!SdMan.openFileForRead("SCT", filePath, file)) {
std::unique_ptr<Page> Section::loadPageFromSD() const {
  const auto filePath = cachePath + "/page_" + std::to_string(currentPage) + ".bin";

  File inputFile;
  if (!FsHelpers::openFileForRead("SCT", filePath, inputFile)) {
    return nullptr;
  }

  file.seek(HEADER_SIZE - sizeof(uint32_t));
  uint32_t lutOffset;
  serialization::readPod(file, lutOffset);
  file.seek(lutOffset + sizeof(uint32_t) * currentPage);
  uint32_t pagePos;
  serialization::readPod(file, pagePos);
  file.seek(pagePos);

  auto page = Page::deserialize(file);
  file.close();
  auto page = Page::deserialize(inputFile);
  inputFile.close();
  return page;
}

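Read together, createSectionFile and loadPageFromSectionFile imply the following on-disk shape for the single-file cache. The header fields before pageCount are written by writeSectionFileHeader and are not visible in this diff, so only the tail of the header is shown and the layout as a whole is an assumption:

// sections/<spineIndex>.bin (assumed layout):
//
//   [ ... font/layout settings ... | pageCount : uint16_t | lutOffset : uint32_t ]   <- HEADER_SIZE bytes
//   [ serialized Page 0 ][ serialized Page 1 ] ... [ serialized Page N-1 ]
//   [ offset of Page 0 : uint32_t ][ offset of Page 1 ] ... [ offset of Page N-1 ]   <- the LUT, at lutOffset
//
// Looking up page p is then two seeks: one into the LUT entry, one to the page itself,
// which is what loadPageFromSectionFile does above.
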
@ -1,5 +1,4 @@
#pragma once
#include <functional>
#include <memory>

#include "Epub.h"
@ -11,29 +10,27 @@ class Section {
  std::shared_ptr<Epub> epub;
  const int spineIndex;
  GfxRenderer& renderer;
  std::string filePath;
  FsFile file;
  std::string cachePath;

  void writeSectionFileHeader(int fontId, float lineCompression, bool extraParagraphSpacing, uint8_t paragraphAlignment,
                              uint16_t viewportWidth, uint16_t viewportHeight, bool hyphenationEnabled);
  uint32_t onPageComplete(std::unique_ptr<Page> page);
  void writeCacheMetadata(int fontId, float lineCompression, int marginTop, int marginRight, int marginBottom,
                          int marginLeft, bool extraParagraphSpacing, bool hyphenationEnabled) const;
  void onPageComplete(std::unique_ptr<Page> page);

 public:
  uint16_t pageCount = 0;
  int pageCount = 0;
  int currentPage = 0;

  explicit Section(const std::shared_ptr<Epub>& epub, const int spineIndex, GfxRenderer& renderer)
      : epub(epub),
        spineIndex(spineIndex),
        renderer(renderer),
        filePath(epub->getCachePath() + "/sections/" + std::to_string(spineIndex) + ".bin") {}
        cachePath(epub->getCachePath() + "/" + std::to_string(spineIndex)) {}
  ~Section() = default;
  bool loadSectionFile(int fontId, float lineCompression, bool extraParagraphSpacing, uint8_t paragraphAlignment,
                       uint16_t viewportWidth, uint16_t viewportHeight, bool hyphenationEnabled);
  bool loadCacheMetadata(int fontId, float lineCompression, int marginTop, int marginRight, int marginBottom,
                         int marginLeft, bool extraParagraphSpacing, bool hyphenationEnabled);
  void setupCacheDir() const;
  bool clearCache() const;
  bool createSectionFile(int fontId, float lineCompression, bool extraParagraphSpacing, uint8_t paragraphAlignment,
                         uint16_t viewportWidth, uint16_t viewportHeight,
                         const std::function<void()>& progressSetupFn = nullptr,
                         const std::function<void(int)>& progressFn = nullptr, bool hyphenationEnabled = false);
  std::unique_ptr<Page> loadPageFromSectionFile();
  bool persistPageDataToSD(int fontId, float lineCompression, int marginTop, int marginRight, int marginBottom,
                           int marginLeft, bool extraParagraphSpacing, bool hyphenationEnabled);
  std::unique_ptr<Page> loadPageFromSD() const;
};

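For orientation, a minimal sketch of how a caller might drive the newer single-file API. The real call sites are not part of this diff, and epub, renderer, spineIndex and the layout settings are assumed to exist in the surrounding code:

// Illustrative only - not taken from the actual call sites.
auto section = std::make_unique<Section>(epub, spineIndex, renderer);
section->setupCacheDir();

// Reuse the cached section file when it matches the current settings, otherwise rebuild it.
if (!section->loadSectionFile(fontId, lineCompression, extraParagraphSpacing, paragraphAlignment,
                              viewportWidth, viewportHeight, hyphenationEnabled)) {
  section->createSectionFile(fontId, lineCompression, extraParagraphSpacing, paragraphAlignment,
                             viewportWidth, viewportHeight, nullptr, nullptr, hyphenationEnabled);
}

section->currentPage = 0;
if (const auto page = section->loadPageFromSectionFile()) {
  // hand the deserialized Page to the renderer
}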