Read the following sections if you suspect problems with Burn may be due to memory issues on the render nodes, or that render nodes do not meet the graphics card requirements for a certain job type.
Processing jobs that require a GPU
Some of the jobs created in your Creative Finishing application (for example, floating point jobs such as unclamped colors in Action, directional RGB blur, or radial RGB blur) require a GPU-accelerated graphics card to be rendered. Although your workstation is equipped with a GPU-accelerated graphics card and can render such jobs locally, your background processing network cannot render these types of jobs unless at least one Burn node is equipped with a GPU.
To see if a render node has the hardware capabilities to process jobs that require a GPU, use the verifyBurnServer script, Backburner Monitor, or Backburner Web Monitor.
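The verifyBurnServer script is the definitive check, since it validates the full graphics requirements for Burn. As an informal first pass, you can query the node's PCI bus for a graphics adapter from a terminal; the commands below are a generic sketch, not Autodesk tooling, and do not replace verifyBurnServer.

```shell
# Quick, informal check for a graphics adapter on a render node.
# This only detects that a GPU is present; it does not confirm that
# the card meets Burn's requirements (use verifyBurnServer for that).
if lspci 2>/dev/null | grep -qi 'vga\|3d controller'; then
    gpu_present=yes
else
    gpu_present=no
fi
echo "GPU present: $gpu_present"
```

If the check reports no GPU, do not send GPU-dependent jobs to that node.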
If you attempt to submit a job that requires a GPU to a background processing network where no render node is equipped with a GPU, one of the following situations occurs:
To avoid further problems, before attempting to submit a job that requires a GPU to your background processing network, make sure at least one of the render nodes is equipped with a GPU, and that the BackburnerManagerGroupCapability keyword in the application’s init.cfg file is set up correctly.
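You can confirm from a terminal that the keyword is present and uncommented in the application's init.cfg. The file path and the example value below are illustrative assumptions; a self-contained sample file is created so the commands can be run anywhere.

```shell
# Check that BackburnerManagerGroupCapability is set (not commented out).
# The sample file stands in for the application's real init.cfg; the
# value "GPU" is an example, not a documented setting.
cfg=/tmp/init.cfg.example
cat > "$cfg" <<'EOF'
# Example init.cfg fragment
BackburnerManagerGroupCapability GPU
EOF
if grep -q '^BackburnerManagerGroupCapability' "$cfg"; then
    echo "keyword is set: $(grep '^BackburnerManagerGroupCapability' "$cfg")"
else
    echo "keyword missing or commented out"
fi
```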
Troubleshoot memory problems
This section explains how to diagnose and address problems that are caused by jobs submitted from workstations with more memory than the render node.
Inferno 2013.1, Flame 2013.1, Flint 2013.1, Smoke 2013.1, and Backdraft Conform 2013.1 are all 64-bit applications, and can thus make full use of up to 16 GB of memory.
As a general rule, render nodes should have the same amount of RAM as the Creative Finishing workstation you are sending jobs from.
A Burn server running on a render node with less memory than your Creative Finishing workstation may fail when processing these jobs, due to their higher memory demands. However, do not assume that every problem on such render nodes is caused exclusively by memory issues.
If you suspect that a render node has failed due to a job exceeding the node's memory capacity, check the logs:
- If you are running graphics on the render node, log in as root and open a terminal. Otherwise, just log in as root.
- Navigate to /usr/discreet/log. This directory contains event logs for the Burn servers installed on the render node. Identify the Burn log file from the time of the Burn server failure using one of the following methods:
- If the render node has just failed, look for the following file: burn<version>_<render_node_name>_app.log.
- If the render node failed previously and was brought back online, look for burn<version>_<render_node_name>_app.log.## created around the time of the render node's failure.
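Listing the logs newest-first makes the file from the failure window easy to spot. The directory and file names below are sample stand-ins (a real node uses /usr/discreet/log and its own hostname and Burn version), created here so the example is self-contained.

```shell
# List Burn application logs newest-first; the top entry is the most
# recently written log. "/tmp/discreet_log_example" and "node01" are
# placeholders for /usr/discreet/log and the real render node name.
logdir=/tmp/discreet_log_example
mkdir -p "$logdir"
touch "$logdir/burn2013_node01_app.log.01"
sleep 1
touch "$logdir/burn2013_node01_app.log"    # most recently written log
ls -t "$logdir"/burn*_app.log* | head -n 1
```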
- Review the messages in the log file for entries similar to the following, which may indicate that the render node was experiencing memory problems at the time of failure:
- [error] 8192 PPLogger.C:145 01/24/06:17:06:16.998 Cannot load video media in node "clip17" for frame 2
- [error] 8192 PPLogger.C:145 01/24/06:17:06:17.210 Out of memory for image buffers in node "clip6" (76480512 bytes).
- If such entries appear, increase the amount of memory reserved for Burn jobs (the MemoryApplication token), as described under Addressing Memory Issues.
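The log review above can also be scripted, which is convenient when checking several render nodes. A short sample log is created here so the example is self-contained; on a real node you would point the grep at the log file under /usr/discreet/log instead.

```shell
# Count out-of-memory entries in a Burn application log. The sample
# file stands in for the real burn<version>_<node>_app.log.
log=/tmp/burn_app_example.log
cat > "$log" <<'EOF'
[error] 8192 PPLogger.C:145 01/24/06:17:06:16.998 Cannot load video media in node "clip17" for frame 2
[error] 8192 PPLogger.C:145 01/24/06:17:06:17.210 Out of memory for image buffers in node "clip6" (76480512 bytes).
EOF
grep -c 'Out of memory' "$log"
```

A non-zero count suggests the node ran out of image buffer memory while processing the job.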
- Next, check the Backburner Server log file /usr/discreet/backburner/log/backburnerServer.log from the time of the server failure, using the methods listed above.
- Review the messages in the Backburner Server log file in a text editor, looking for entries similar to the following:
- [notice] 16387 common_services.cpp:45 01/24/06:17:06:10.069 Launching 'burn'
- [error] 16387 common_services.cpp:37 01/24/06:17:06:48.182 Task error: burn application terminated (Hangup)
- [error] 16387 common_services.cpp:37 01/24/06:17:06:48.182 burn application terminated (Hangup)
These log entries confirm that a server failure occurred on the render node. Since you know the server failed around this time, you can deduce that the memory problem caused the Burn server to fail.
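The Backburner Server log can be searched the same way. The sample data below stands in for /usr/discreet/backburner/log/backburnerServer.log so the commands can be run anywhere.

```shell
# Look for the burn termination message in the Backburner Server log.
# The sample file stands in for the real backburnerServer.log.
blog=/tmp/backburnerServer_example.log
cat > "$blog" <<'EOF'
[notice] 16387 common_services.cpp:45 01/24/06:17:06:10.069 Launching 'burn'
[error] 16387 common_services.cpp:37 01/24/06:17:06:48.182 Task error: burn application terminated (Hangup)
EOF
grep 'terminated (Hangup)' "$blog"
```

A match near the time of the Burn out-of-memory entries supports the conclusion that the memory problem killed the Burn server.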
- Optional: Identify the workstation running the application that submitted the job, and then examine the Batch setup, timeline segment, or clip to try to determine why the Burn server failed. Knowing what caused the render node to fail helps you gauge what jobs your render nodes can handle, and can suggest ways to deal with the problem. Server failures due to lack of memory on a render node usually arise from:
- The size of images used in a project. For example, projects using higher resolution HD, 2K, and 4K images require more memory to store and render than SD projects.
- The complexity of the effect sent for processing. For example, a complex Batch setup with many layers and effects requires more memory to render than a simple Batch setup.
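The effect of image size on memory is easy to estimate: an uncompressed frame needs roughly width x height x channels x bytes-per-channel. The resolutions and bit depth below are common examples chosen for illustration, not values from any particular project.

```shell
# Rough per-frame memory estimate for an uncompressed RGBA frame at
# 16 bits (2 bytes) per channel. Example resolutions only.
frame_bytes() { echo $(( $1 * $2 * 4 * 2 )); }
sd=$(frame_bytes 720 486)      # SD NTSC:          2,799,360 bytes
k2=$(frame_bytes 2048 1556)    # 2K full-aperture: 25,493,504 bytes
echo "SD frame: $sd bytes; 2K frame: $k2 bytes"
```

A single 2K frame needs roughly nine times the memory of an SD frame, which is why projects that render comfortably at SD can exhaust a render node at 2K or 4K.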
Addressing Memory Issues
If servers on your render nodes are failing while processing jobs, increase the amount of RAM set aside for processing jobs. You must repeat this procedure on each render node on your network running the server.
To configure Burn to reserve a set amount of RAM for jobs:
- In a terminal, as root, stop the Backburner Server: /etc/init.d/backburner_server stop.
- In /usr/discreet/burn_<version>/cfg/init.cfg uncomment the MemoryApplication keyword. This keyword sets the amount of RAM in megabytes (MB) to be reserved for Burn jobs. This keyword is disabled by default so Burn can dynamically adjust the amount of RAM used for each job based on the resolution of the project. When you enable this keyword, Burn reserves the same amount of memory for each job regardless of the project's resolution.
- If necessary, change the value for the MemoryApplication keyword to set the amount of RAM (in MB) to be reserved for each Burn job up to 1400 (about 1.4 GB). For example: MemoryApplication 1024. Setting the MemoryApplication keyword so that the (total render node memory) - (value of MemoryApplication) is less than 2600 MB may adversely affect the stability of the render node.
- Save and close init.cfg, then restart the Backburner Server on the render node by typing: /etc/init.d/backburner_server start.
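Before committing a MemoryApplication value, you can sanity-check it against the limits described above: the value must not exceed 1400 MB, and total node RAM minus the value should stay at or above 2600 MB. The 4 GB node total below is an example assumption.

```shell
# Check a proposed MemoryApplication value (MB) against the documented
# limits. total_mb is an example; substitute your node's real RAM.
total_mb=4096
proposed=1024
if [ "$proposed" -le 1400 ] && [ $(( total_mb - proposed )) -ge 2600 ]; then
    echo "MemoryApplication $proposed is within the documented limits"
else
    echo "MemoryApplication $proposed risks node instability"
fi
```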
- Optionally implement the following guidelines for processing Burn jobs. Although these guidelines are not mandatory, following them may help increase the success rate while processing Burn jobs on render nodes with limited memory resources.
- If you know that the size of images in your projects may cause render node failure, enforce guidelines about what can and cannot be sent to the Burn render nodes. For example, if you know that 2K and 4K images with Batch setups exceeding six layers may cause the render nodes to fail, ensure these setups are not sent to Burn.
- If you know that the complexity of the effects sent for processing may cause render node failure, simplify effects by creating multiple Batch setups or by processing memory-intensive effects locally. For example, if you know that complex Batch setups with multiple logic ops and colour correction may cause render nodes to fail, render these locally instead.
If, after following these guidelines, your render nodes still fail because of low memory, consider adding memory to the render nodes. Matching the amount of memory on the render nodes with the amount of memory found on your Creative Finishing workstation is the most effective solution to memory issues.