
VGG #

eqxvision.models.VGG #

A simple port of torchvision.models.vgg.

__init__(self, cfg: List[Union[str, int]] = None, num_classes: int = 1000, batch_norm: bool = True, dropout: float = 0.5, *, key: Optional[jax.random.PRNGKey] = None) #

Arguments:

  • cfg: A list specifying the block configuration.
  • num_classes: Number of classes in the classification task. Also controls the final output shape (num_classes,). Defaults to 1000.
  • batch_norm: If True, BatchNorm is enabled in the architecture.
  • dropout: The probability parameter for equinox.nn.Dropout.
  • key: A jax.random.PRNGKey used to provide randomness for parameter initialisation. (Keyword only argument.)
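
For example, a model can be constructed directly from a configuration list. The sketch below assumes the torchvision-style cfg convention (integers give conv output channels, "M" inserts a max-pool); everything else follows the signature above.

```python
import jax.random as jr
from eqxvision.models import VGG

# Torchvision's VGG-11 (configuration "A") layout, assuming the same
# cfg convention: ints are conv output channels, "M" is a max-pool.
cfg = [64, "M", 128, "M", 256, 256, "M", 512, 512, "M", 512, 512, "M"]

model = VGG(
    cfg=cfg,
    num_classes=10,     # final output shape is (10,)
    batch_norm=False,
    dropout=0.5,
    key=jr.PRNGKey(0),  # keyword-only PRNG key for parameter initialisation
)
```
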
__call__(self, x: Array, *, key: jax.random.PRNGKey) -> Array #

Arguments:

  • x: The input. Should be a JAX array with 3 channels.
  • key: Required parameter. Utilised by a few layers, such as Dropout or DropPath.
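
As with other Equinox modules, the model acts on a single image; batching is handled with jax.vmap. A minimal sketch (the 224x224 size is illustrative, anything at least 32x32 works; batch_norm=False keeps the sketch free of batch-axis bookkeeping):

```python
import jax
import jax.random as jr
from eqxvision.models import VGG

cfg = [64, "M", 128, "M", 256, 256, "M", 512, 512, "M", 512, 512, "M"]
model = VGG(cfg=cfg, num_classes=10, batch_norm=False, key=jr.PRNGKey(0))

# Single example: a 3-channel image of shape (3, H, W).
x = jr.normal(jr.PRNGKey(1), (3, 224, 224))
logits = model(x, key=jr.PRNGKey(2))  # shape (10,)

# Batched inputs: vmap over the leading axis, with one key per example
# so the stochastic layers (Dropout here) draw independent randomness.
batch = jr.normal(jr.PRNGKey(3), (8, 3, 224, 224))
keys = jr.split(jr.PRNGKey(4), 8)
batch_logits = jax.vmap(model)(batch, key=keys)  # shape (8, 10)
```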

eqxvision.models.vgg11(torch_weights: str = None, **kwargs: Any) -> VGG #

VGG 11-layer model (configuration "A") from Very Deep Convolutional Networks for Large-Scale Image Recognition. The required minimum input size of the model is 32x32.

Arguments:

  • torch_weights: A Path or URL for the PyTorch weights. Defaults to None.
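
A minimal usage sketch; it assumes that extra keyword arguments (num_classes, key) are forwarded to the VGG constructor via **kwargs:

```python
import jax.random as jr
from eqxvision.models import vgg11

net = vgg11(num_classes=10, key=jr.PRNGKey(0))  # randomly initialised

x = jr.normal(jr.PRNGKey(1), (3, 32, 32))  # 32x32 is the stated minimum input size
logits = net(x, key=jr.PRNGKey(2))         # shape (10,)
```
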

eqxvision.models.vgg11_bn(torch_weights: str = None, **kwargs: Any) -> VGG #

VGG 11-layer model (configuration "A") with batch normalization, from Very Deep Convolutional Networks for Large-Scale Image Recognition. The required minimum input size of the model is 32x32.

Arguments:

  • torch_weights: A Path or URL for the PyTorch weights. Defaults to None.

eqxvision.models.vgg13(torch_weights: str = None, **kwargs: Any) -> VGG #

VGG 13-layer model (configuration "B") from Very Deep Convolutional Networks for Large-Scale Image Recognition. The required minimum input size of the model is 32x32.

Arguments:

  • torch_weights: A Path or URL for the PyTorch weights. Defaults to None.

eqxvision.models.vgg13_bn(torch_weights: str = None, **kwargs: Any) -> VGG #

VGG 13-layer model (configuration "B") with batch normalization, from Very Deep Convolutional Networks for Large-Scale Image Recognition. The required minimum input size of the model is 32x32.

Arguments:

  • torch_weights: A Path or URL for the PyTorch weights. Defaults to None.

eqxvision.models.vgg16(torch_weights: str = None, **kwargs: Any) -> VGG #

VGG 16-layer model (configuration "D") from Very Deep Convolutional Networks for Large-Scale Image Recognition. The required minimum input size of the model is 32x32.

Arguments:

  • torch_weights: A Path or URL for the PyTorch weights. Defaults to None.

eqxvision.models.vgg16_bn(torch_weights: str = None, **kwargs: Any) -> VGG #

VGG 16-layer model (configuration "D") with batch normalization, from Very Deep Convolutional Networks for Large-Scale Image Recognition. The required minimum input size of the model is 32x32.

Arguments:

  • torch_weights: A Path or URL for the PyTorch weights. Defaults to None.

eqxvision.models.vgg19(torch_weights: str = None, **kwargs: Any) -> VGG #

VGG 19-layer model (configuration "E") from Very Deep Convolutional Networks for Large-Scale Image Recognition. The required minimum input size of the model is 32x32.

Arguments:

  • torch_weights: A Path or URL for the PyTorch weights. Defaults to None.

eqxvision.models.vgg19_bn(torch_weights: str = None, **kwargs: Any) -> VGG #

VGG 19-layer model (configuration "E") with batch normalization, from Very Deep Convolutional Networks for Large-Scale Image Recognition. The required minimum input size of the model is 32x32.

Arguments:

  • torch_weights: A Path or URL for the PyTorch weights. Defaults to None.
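
All of the constructors above accept torch_weights in the same way. Below is a hedged sketch of loading pretrained PyTorch weights; the checkpoint URL is torchvision's published location for vgg19_bn and should be treated as an assumption, as should the use of equinox.tree_inference to switch Dropout and BatchNorm to inference behaviour:

```python
import equinox as eqx
import jax.random as jr
from eqxvision.models import vgg19_bn

# Assumed torchvision checkpoint URL; a local .pth path also works
# per the torch_weights documentation above.
net = vgg19_bn(
    torch_weights="https://download.pytorch.org/models/vgg19_bn-c79401a0.pth"
)
net = eqx.tree_inference(net, value=True)  # disable Dropout, freeze BatchNorm stats

x = jr.normal(jr.PRNGKey(0), (3, 224, 224))
logits = net(x, key=jr.PRNGKey(1))  # ImageNet logits, shape (1000,)
```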