kernelkit.torch_support.AutogradOperator

class kernelkit.torch_support.AutogradOperator(*args, **kwargs)
    Autograd function for Operator objects.
Attributes

- dirty_tensors
- generate_vmap_rule
- materialize_grads
- metadata
- needs_input_grad
- next_functions
- non_differentiable
- requires_grad
- saved_for_forward
- saved_tensors
- saved_variables
- to_save
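Most of these attributes live on the context object (`ctx`) that PyTorch passes to `backward()`. As one illustration, `needs_input_grad` holds one flag per `forward()` argument and is the usual way to skip gradients that were not requested. A minimal sketch of the generic `torch.autograd.Function` pattern (not specific to `AutogradOperator`):

```python
import torch

class Scale(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x, w):
        ctx.save_for_backward(x, w)   # populates ctx.saved_tensors
        return x * w

    @staticmethod
    def backward(ctx, grad_output):
        x, w = ctx.saved_tensors
        # needs_input_grad has one entry per forward() argument.
        grad_x = grad_output * w if ctx.needs_input_grad[0] else None
        grad_w = grad_output * x if ctx.needs_input_grad[1] else None
        return grad_x, grad_w
```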
Methods

__call__(*args, **kwargs)
    Call self as a function.
apply(*args, **kwargs)
backward(ctx, grad_output)
    Define a formula for differentiating the operation with backward mode automatic differentiation.
forward(ctx, input, op)
    Define the forward of the custom autograd Function.
jvp(ctx, *grad_inputs)
    Define a formula for differentiating the operation with forward mode automatic differentiation.
mark_dirty(*args)
    Mark given tensors as modified in an in-place operation.
mark_non_differentiable(*args)
    Mark outputs as non-differentiable.
mark_shared_storage(*pairs)
maybe_clear_saved_tensors
name
register_hook
register_prehook
save_for_backward(*tensors)
    Save given tensors for a future call to backward().
save_for_forward(*tensors)
    Save given tensors for a future call to jvp().
set_materialize_grads(value)
    Set whether to materialize grad tensors.
setup_context(ctx, inputs, output)
    There are two ways to define the forward pass of an autograd.Function.
vjp(ctx, *grad_outputs)
    Define a formula for differentiating the operation with backward mode automatic differentiation.
vmap(info, in_dims, *args)
    Define the behavior for this autograd.Function underneath torch.vmap().
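As a point of reference for the `forward()`/`backward()` pair documented above: for a linear operator, the backward formula applies the adjoint. The sketch below uses a plain matrix as a stand-in for a kernelkit `Operator` (an assumption made purely for illustration) and verifies the gradient with `torch.autograd.gradcheck`:

```python
import torch

class MatVec(torch.autograd.Function):
    """Forward applies A @ x; backward applies the adjoint A.T @ grad."""

    @staticmethod
    def forward(ctx, x, A):
        ctx.save_for_backward(A)
        return A @ x

    @staticmethod
    def backward(ctx, grad_output):
        (A,) = ctx.saved_tensors
        # One return value per forward() argument; A gets no gradient.
        return A.t() @ grad_output, None

A = torch.randn(3, 5, dtype=torch.double)
x = torch.randn(5, dtype=torch.double, requires_grad=True)
torch.autograd.gradcheck(MatVec.apply, (x, A))  # checks the backward formula
```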
__init__(*args, **kwargs)
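The `setup_context` entry above refers to the newer PyTorch (>= 2.0) style in which `forward()` takes no `ctx` and state is attached in a separate step; per the PyTorch docs, this style is also what makes a Function eligible for transforms such as `torch.vmap()`. A minimal sketch of that style, together with `set_materialize_grads()`:

```python
import torch

class ScaleBy(torch.autograd.Function):
    @staticmethod
    def forward(x, scale):
        # No ctx argument in this style; state is saved in setup_context.
        return x * scale

    @staticmethod
    def setup_context(ctx, inputs, output):
        _, scale = inputs
        ctx.scale = scale
        ctx.set_materialize_grads(True)  # undefined grads arrive as zeros

    @staticmethod
    def backward(ctx, grad_output):
        # `scale` is a plain float here, so it receives no gradient.
        return grad_output * ctx.scale, None

x = torch.randn(4, dtype=torch.double, requires_grad=True)
torch.autograd.gradcheck(lambda t: ScaleBy.apply(t, 2.0), (x,))
```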