Render Requests
To trigger a camera to render to a render texture outside of the Universal Render Pipeline (URP) rendering loop, use the `SubmitRenderRequest` API in a C# script.
This example shows how to use render requests and callbacks to monitor the progress of these requests. You can see the full code sample in the Example code section.
Render a single camera from a camera stack
To render a single camera without taking into account the full stack of cameras, use the `UniversalRenderPipeline.SingleCameraRequest` API. Follow these steps:
- Create a C# script with the name `SingleCameraRenderRequestExample` and add the `using` statements shown below.

  ```csharp
  using System.Collections;
  using UnityEngine;
  using UnityEngine.Rendering;
  using UnityEngine.Rendering.Universal;

  public class SingleCameraRenderRequestExample : MonoBehaviour
  {
  }
  ```
- Create arrays to store the cameras and Render Textures that you want to render from and to.

  ```csharp
  public class SingleCameraRenderRequestExample : MonoBehaviour
  {
      public Camera[] cameras;
      public RenderTexture[] renderTextures;
  }
  ```
- In the `Start` method, add a check to ensure the `cameras` and `renderTextures` arrays are valid and contain the correct data before continuing with running the script.

  ```csharp
  void Start()
  {
      // Make sure all data is valid before you start the component
      if (cameras == null || cameras.Length == 0 || renderTextures == null || cameras.Length != renderTextures.Length)
      {
          Debug.LogError("Invalid setup");
          return;
      }
  }
  ```
- Make a method with the name `SendSingleRenderRequests` and the return type `void` within the `SingleCameraRenderRequestExample` class.
- In the `SendSingleRenderRequests` method, add a `for` loop that iterates over the `cameras` array as shown below.

  ```csharp
  void SendSingleRenderRequests()
  {
      for (int i = 0; i < cameras.Length; i++)
      {
      }
  }
  ```
- Inside the `for` loop, create a render request of the `UniversalRenderPipeline.SingleCameraRequest` type in a variable with the name `request`. Then check if the active render pipeline supports this render request type with `RenderPipeline.SupportsRenderRequest`.
- If the active render pipeline supports the render request, set the destination of the camera output to the matching Render Texture from the `renderTextures` array. Then submit the render request with `RenderPipeline.SubmitRenderRequest`.

  ```csharp
  void SendSingleRenderRequests()
  {
      for (int i = 0; i < cameras.Length; i++)
      {
          UniversalRenderPipeline.SingleCameraRequest request =
              new UniversalRenderPipeline.SingleCameraRequest();

          // Check if the active render pipeline supports the render request
          if (RenderPipeline.SupportsRenderRequest(cameras[i], request))
          {
              // Set the destination of the camera output to the matching RenderTexture
              request.destination = renderTextures[i];

              // Render the camera output to the RenderTexture synchronously.
              // When this is complete, the RenderTexture in renderTextures[i] contains the scene
              // rendered from the point of view of the Camera in cameras[i]
              RenderPipeline.SubmitRenderRequest(cameras[i], request);
          }
      }
  }
  ```
- Above the `SendSingleRenderRequests` method, create a method with the return type `IEnumerator` and the name `RenderSingleRequestNextFrame`.
- Inside `RenderSingleRequestNextFrame`, wait for the main camera to finish rendering, then call `SendSingleRenderRequests`. Wait for the end of the frame before restarting `RenderSingleRequestNextFrame` in a coroutine with `StartCoroutine`.

  ```csharp
  IEnumerator RenderSingleRequestNextFrame()
  {
      // Wait for the main camera to finish rendering
      yield return new WaitForEndOfFrame();

      // Enqueue one render request for each camera
      SendSingleRenderRequests();

      // Wait for the end of the frame
      yield return new WaitForEndOfFrame();

      // Restart the coroutine
      StartCoroutine(RenderSingleRequestNextFrame());
  }
  ```
- In the `Start` method, call `RenderSingleRequestNextFrame` in a coroutine with `StartCoroutine`.

  ```csharp
  void Start()
  {
      // Make sure all data is valid before you start the component
      if (cameras == null || cameras.Length == 0 || renderTextures == null || cameras.Length != renderTextures.Length)
      {
          Debug.LogError("Invalid setup");
          return;
      }

      // Start the asynchronous coroutine
      StartCoroutine(RenderSingleRequestNextFrame());
  }
  ```
- In the Editor, create an empty GameObject in your scene and add `SingleCameraRenderRequestExample.cs` as a component.
- In the Inspector window, add the camera you want to render from to the `cameras` list, and the Render Texture you want to render into to the `renderTextures` list.

> [!NOTE]
> The number of cameras in the `cameras` list and the number of Render Textures in the `renderTextures` list must be the same.
Now when you enter Play mode, the cameras you added render to the Render Textures you added.
Check when a camera finishes rendering
To check when a camera finishes rendering, use any callback from the RenderPipelineManager API.
The following example uses the RenderPipelineManager.endContextRendering callback.
- Add `using System.Collections.Generic;` to the top of the `SingleCameraRenderRequestExample.cs` file.
- At the end of the `Start` method, subscribe to the `endContextRendering` callback.

  ```csharp
  void Start()
  {
      // Make sure all data is valid before you start the component
      if (cameras == null || cameras.Length == 0 || renderTextures == null || cameras.Length != renderTextures.Length)
      {
          Debug.LogError("Invalid setup");
          return;
      }

      // Start the asynchronous coroutine
      StartCoroutine(RenderSingleRequestNextFrame());

      // Call a method called OnEndContextRendering when a camera finishes rendering
      RenderPipelineManager.endContextRendering += OnEndContextRendering;
  }
  ```
- Create a method with the name `OnEndContextRendering`. Unity runs this method when the `endContextRendering` callback triggers.

  ```csharp
  void OnEndContextRendering(ScriptableRenderContext context, List<Camera> cameras)
  {
      // Create a log to show cameras have finished rendering
      Debug.Log("All cameras have finished rendering.");
  }
  ```
- To unsubscribe the `OnEndContextRendering` method from the `endContextRendering` callback, add an `OnDestroy` method to the `SingleCameraRenderRequestExample` class.

  ```csharp
  void OnDestroy()
  {
      // End the subscription to the callback
      RenderPipelineManager.endContextRendering -= OnEndContextRendering;
  }
  ```
This script works as before, but now also logs a message to the Console window when all cameras have finished rendering.
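The `endContextRendering` callback fires once for the whole render context, with the full list of cameras. If you need to react to each camera individually, `RenderPipelineManager` also exposes the per-camera `beginCameraRendering` and `endCameraRendering` events. A minimal sketch (this is a separate variation, not part of the example above; it uses `OnEnable`/`OnDisable` for the subscription instead of `Start`/`OnDestroy`):

```csharp
using UnityEngine;
using UnityEngine.Rendering;

public class PerCameraCallbackExample : MonoBehaviour
{
    void OnEnable()
    {
        // Subscribe to the per-camera callback
        RenderPipelineManager.endCameraRendering += OnEndCameraRendering;
    }

    void OnDisable()
    {
        // Unsubscribe when the component is disabled or destroyed
        RenderPipelineManager.endCameraRendering -= OnEndCameraRendering;
    }

    void OnEndCameraRendering(ScriptableRenderContext context, Camera camera)
    {
        // Runs once per camera, after that camera finishes rendering
        Debug.Log($"Camera {camera.name} has finished rendering.");
    }
}
```

Subscribing in `OnEnable` and unsubscribing in `OnDisable` keeps the subscription balanced if the component is toggled repeatedly.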
Example code
```csharp
using System.Collections;
using System.Collections.Generic;
using UnityEngine;
using UnityEngine.Rendering;
using UnityEngine.Rendering.Universal;

public class SingleCameraRenderRequestExample : MonoBehaviour
{
    public Camera[] cameras;
    public RenderTexture[] renderTextures;

    void Start()
    {
        // Make sure all data is valid before you start the component
        if (cameras == null || cameras.Length == 0 || renderTextures == null || cameras.Length != renderTextures.Length)
        {
            Debug.LogError("Invalid setup");
            return;
        }

        // Start the asynchronous coroutine
        StartCoroutine(RenderSingleRequestNextFrame());

        // Call a method called OnEndContextRendering when a camera finishes rendering
        RenderPipelineManager.endContextRendering += OnEndContextRendering;
    }

    void OnEndContextRendering(ScriptableRenderContext context, List<Camera> cameras)
    {
        // Create a log to show cameras have finished rendering
        Debug.Log("All cameras have finished rendering.");
    }

    void OnDestroy()
    {
        // End the subscription to the callback
        RenderPipelineManager.endContextRendering -= OnEndContextRendering;
    }

    IEnumerator RenderSingleRequestNextFrame()
    {
        // Wait for the main camera to finish rendering
        yield return new WaitForEndOfFrame();

        // Enqueue one render request for each camera
        SendSingleRenderRequests();

        // Wait for the end of the frame
        yield return new WaitForEndOfFrame();

        // Restart the coroutine
        StartCoroutine(RenderSingleRequestNextFrame());
    }

    void SendSingleRenderRequests()
    {
        for (int i = 0; i < cameras.Length; i++)
        {
            UniversalRenderPipeline.SingleCameraRequest request =
                new UniversalRenderPipeline.SingleCameraRequest();

            // Check if the active render pipeline supports the render request
            if (RenderPipeline.SupportsRenderRequest(cameras[i], request))
            {
                // Set the destination of the camera output to the matching RenderTexture
                request.destination = renderTextures[i];

                // Render the camera output to the RenderTexture synchronously
                RenderPipeline.SubmitRenderRequest(cameras[i], request);

                // At this point, the RenderTexture in renderTextures[i] contains the scene
                // rendered from the point of view of the Camera in cameras[i]
            }
        }
    }
}
```