Details
- Improvement
- Resolution: Fixed
- Major
- 14.9-rc-1
- None
Description
When exporting multiple wiki pages to PDF we have to:
1. render each wiki page to HTML
2. aggregate all the results into a single HTML web page
The first step is done in a low-priority background (daemon) thread on the server, so it shouldn't slow down the server CPU-wise, but collecting the result of rendering each wiki page can require a lot of memory, because we also need to keep the XDOM trees in order to compute the table of contents.
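A minimal sketch of how step 1 could be organized, assuming a hypothetical renderOnePage call standing in for the real rendering code (this is not the actual XWiki implementation): the thread factory mirrors the low-priority daemon thread described above, and the collected results show why memory grows with the number of exported pages.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class BackgroundRenderer {
    /** Result of rendering one wiki page: the HTML plus the parsed tree kept for the TOC. */
    record RenderedPage(String html, Object xdom) {}

    // Single worker: a low-priority daemon thread, so rendering competes as little as
    // possible with interactive requests for CPU and never blocks JVM shutdown.
    private final ExecutorService executor = Executors.newSingleThreadExecutor(runnable -> {
        Thread thread = new Thread(runnable, "pdf-export-renderer");
        thread.setDaemon(true);
        thread.setPriority(Thread.MIN_PRIORITY);
        return thread;
    });

    /** Renders every page and keeps all results in memory until the export is aggregated. */
    public List<RenderedPage> renderAll(List<String> pageReferences) throws Exception {
        List<Future<RenderedPage>> futures = new ArrayList<>();
        for (String reference : pageReferences) {
            futures.add(executor.submit(() -> renderOnePage(reference)));
        }
        List<RenderedPage> results = new ArrayList<>();
        for (Future<RenderedPage> future : futures) {
            // Each result (HTML + XDOM) stays referenced until the whole export is done,
            // which is where the memory pressure described in this issue comes from.
            results.add(future.get());
        }
        return results;
    }

    // Hypothetical stand-in for the real rendering call (document -> XDOM -> HTML).
    private RenderedPage renderOnePage(String reference) {
        return new RenderedPage("<h1>" + reference + "</h1>", new Object());
    }
}
```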
The second step is done in a web browser (either a remote headless Chrome or the user's own web browser). If the generated HTML page is very large, the browser may have trouble loading it and generating the PDF, which can drain a lot of resources (CPU and memory). CPU is used intensively, for instance, when paginating the content with paged.js.
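Because the browser-side step can monopolize CPU and memory for a long time, one generic way to bound it is to wrap the wait for the generated PDF in a timeout. This is only a sketch: printToPdf is a made-up placeholder for whatever call drives the (remote) headless browser, not an XWiki API.

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.CompletionException;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

public class BoundedPdfGeneration {
    /** Fails the export instead of letting the headless browser churn indefinitely. */
    public static byte[] generateWithTimeout(String aggregatedHtml, long timeoutSeconds) {
        CompletableFuture<byte[]> pdf = CompletableFuture
            .supplyAsync(() -> printToPdf(aggregatedHtml))
            // Give up if loading the page and paginating it (e.g. with paged.js) takes too long.
            .orTimeout(timeoutSeconds, TimeUnit.SECONDS);
        try {
            return pdf.join();
        } catch (CompletionException e) {
            if (e.getCause() instanceof TimeoutException) {
                throw new IllegalStateException("PDF export exceeded " + timeoutSeconds + "s", e);
            }
            throw e;
        }
    }

    // Hypothetical stand-in for the call that sends the aggregated HTML to the browser.
    private static byte[] printToPdf(String aggregatedHtml) {
        return new byte[0];
    }
}
```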
This means that users can kill their own web browser, or the one used by the server (and possibly slow down XWiki), by performing large exports, either intentionally or by mistake. Ideally, we should enforce some limits.
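As a concrete example of the kind of limit suggested here, the aggregation step could track the cumulative size of the rendered HTML and fail fast once a configurable threshold is crossed. The limit value below is made up for illustration and does not correspond to an actual configuration key.

```java
import java.nio.charset.StandardCharsets;
import java.util.List;

public class ExportSizeLimiter {
    // Hypothetical limit; the real value would come from the PDF export configuration.
    private static final long MAX_CONTENT_SIZE_BYTES = 5L * 1024 * 1024;

    /** Concatenates the rendered pages, aborting as soon as the limit is exceeded. */
    public static String aggregate(List<String> renderedPages) {
        StringBuilder aggregated = new StringBuilder();
        long totalBytes = 0;
        for (String pageHtml : renderedPages) {
            totalBytes += pageHtml.getBytes(StandardCharsets.UTF_8).length;
            if (totalBytes > MAX_CONTENT_SIZE_BYTES) {
                // Stop before building an HTML page the browser may not be able to load.
                throw new IllegalStateException(
                    "PDF export aborted: content size exceeds " + MAX_CONTENT_SIZE_BYTES + " bytes");
            }
            aggregated.append(pageHtml);
        }
        return aggregated.toString();
    }
}
```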
Issue Links
- is related to
  - XWIKI-20377 Compute the aggregated table of contents as we go instead of keeping the XDOMs in memory (Open)
- relates to
  - XWIKI-20376 Limit the number of PDF exports that can be done in parallel (Closed)
  - XWIKI-20881 Don't enforce the size limit on single page PDF exports (Closed)