neuralop.layers.embeddings.SinusoidalEmbedding
- class neuralop.layers.embeddings.SinusoidalEmbedding(in_channels: int, num_frequencies: int | None = None, embedding_type: str = 'transformer', max_positions: int = 10000)[source]
SinusoidalEmbedding provides a unified sinusoidal positional embedding in the styles of Transformers (Vaswani et al., 2017) and Neural Radiance Fields (NeRFs) (Mildenhall et al., 2020).
- Parameters:
- in_channels : int
Number of input channels to embed.
- num_frequencies : int, optional
Number of frequencies in the positional embedding. By default, set to the number of input channels.
- embedding_type : {‘transformer’, ‘nerf’}
Type of embedding to apply. For a function with N input channels, each channel value p is embedded via a function g with 2L channels, so that g(p) is a 2L-dimensional vector. For 0 <= k < L:
‘transformer’ : transformer-style encoding.
g(p)_{2k} = sin(p / max_positions^{k/L})
g(p)_{2k+1} = cos(p / max_positions^{k/L})
‘nerf’ : NeRF-style encoding.
g(p)_{2k} = sin(2^k * pi * p)
g(p)_{2k+1} = cos(2^k * pi * p)
- max_positions : int, optional
Maximum number of positions for the encoding; default 10000. Only used if embedding_type == 'transformer'.
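The two encodings above can be sketched in plain NumPy. This is a hedged illustration of the math only, not the library's PyTorch implementation; the interleaved sin/cos channel ordering is an assumption and may differ from neuralop's actual output layout.

```python
import numpy as np

def transformer_embed(p, L, max_positions=10000):
    """Transformer-style embedding of positions p into 2*L channels.

    Sketch: g(p)_{2k} = sin(p / max_positions^{k/L}),
            g(p)_{2k+1} = cos(p / max_positions^{k/L}).
    """
    p = np.asarray(p, dtype=float)[..., None]     # (..., 1)
    k = np.arange(L)                              # frequency indices 0..L-1
    freqs = (1.0 / max_positions) ** (k / L)      # max_positions^{-k/L}
    angles = p * freqs                            # (..., L)
    out = np.empty(angles.shape[:-1] + (2 * L,))
    out[..., 0::2] = np.sin(angles)               # even channels: sin
    out[..., 1::2] = np.cos(angles)               # odd channels: cos
    return out

def nerf_embed(p, L):
    """NeRF-style embedding: sin/cos of 2^k * pi * p for 0 <= k < L."""
    p = np.asarray(p, dtype=float)[..., None]
    angles = (2.0 ** np.arange(L)) * np.pi * p
    out = np.empty(angles.shape[:-1] + (2 * L,))
    out[..., 0::2] = np.sin(angles)
    out[..., 1::2] = np.cos(angles)
    return out
```

For example, `transformer_embed([0.0, 1.0], L=4)` returns a `(2, 8)` array whose first row alternates sin(0) = 0 and cos(0) = 1.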
- Attributes:
- out_channels
Required property for linking/composing model layers.
Methods
forward(x)

References
- Vaswani, A. et al (2017)
“Attention Is All You Need”. NeurIPS 2017, https://proceedings.neurips.cc/paper_files/paper/2017/file/3f5ee243547dee91fbd053c1c4a845aa-Paper.pdf.
- Mildenhall, B. et al (2020)
“NeRF: Representing Scenes as Neural Radiance Fields for View Synthesis”. arXiv, https://arxiv.org/pdf/2003.08934.
- forward(x)[source]
- Parameters:
- x : torch.Tensor, shape (n_in, self.in_channels) or (batch, n_in, self.in_channels)
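The shape contract implied by the parameters above can be checked with a small NumPy sketch: each of the `in_channels` values per point is mapped to `2 * num_frequencies` values, so an input of shape `(batch, n_in, in_channels)` becomes `(batch, n_in, in_channels * 2 * num_frequencies)`. The concatenation order of the sin/cos halves here is an assumption, not the library's guaranteed layout.

```python
import numpy as np

# Hypothetical shape check mirroring SinusoidalEmbedding.forward
# with embedding_type='transformer' and num_frequencies = L.
batch, n_in, in_channels, L = 4, 16, 2, 8
x = np.random.rand(batch, n_in, in_channels)

k = np.arange(L)
freqs = (1.0 / 10000) ** (k / L)          # max_positions^{-k/L}
angles = x[..., None] * freqs             # (batch, n_in, in_channels, L)

# Embed each channel value, then flatten the per-channel sin/cos pairs.
embedded = np.concatenate([np.sin(angles), np.cos(angles)], axis=-1)
embedded = embedded.reshape(batch, n_in, in_channels * 2 * L)

assert embedded.shape == (4, 16, 32)      # out_channels = in_channels * 2 * L
```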